Test Report: QEMU_macOS 18350

b07500d1f25ef3b9b4cf5a8c10c74b3642cd60ca:2024-03-11:33512

Failed tests (98/281)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 40.1
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.14
39 TestAddons/parallel/Ingress 34.53
54 TestCertOptions 10.1
55 TestCertExpiration 195.18
56 TestDockerFlags 10.11
57 TestForceSystemdFlag 10.07
58 TestForceSystemdEnv 10.88
103 TestFunctional/parallel/ServiceCmdConnect 39.2
175 TestMutliControlPlane/serial/StopSecondaryNode 214.12
176 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.67
177 TestMutliControlPlane/serial/RestartSecondaryNode 209.03
179 TestMutliControlPlane/serial/RestartClusterKeepsNodes 234.4
180 TestMutliControlPlane/serial/DeleteSecondaryNode 0.11
181 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.15
182 TestMutliControlPlane/serial/StopCluster 202.08
183 TestMutliControlPlane/serial/RestartCluster 5.26
184 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.11
185 TestMutliControlPlane/serial/AddSecondaryNode 0.08
189 TestImageBuild/serial/Setup 9.89
192 TestJSONOutput/start/Command 9.8
198 TestJSONOutput/pause/Command 0.08
204 TestJSONOutput/unpause/Command 0.05
221 TestMinikubeProfile 10.34
224 TestMountStart/serial/StartWithMountFirst 10.63
227 TestMultiNode/serial/FreshStart2Nodes 9.98
228 TestMultiNode/serial/DeployApp2Nodes 96.46
229 TestMultiNode/serial/PingHostFrom2Pods 0.09
230 TestMultiNode/serial/AddNode 0.07
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.1
233 TestMultiNode/serial/CopyFile 0.06
234 TestMultiNode/serial/StopNode 0.14
235 TestMultiNode/serial/StartAfterStop 55.61
236 TestMultiNode/serial/RestartKeepsNodes 8.68
237 TestMultiNode/serial/DeleteNode 0.11
238 TestMultiNode/serial/StopMultiNode 3.32
239 TestMultiNode/serial/RestartMultiNode 5.27
240 TestMultiNode/serial/ValidateNameConflict 20.24
244 TestPreload 10.1
246 TestScheduledStopUnix 10.02
247 TestSkaffold 16.58
250 TestRunningBinaryUpgrade 634.96
252 TestKubernetesUpgrade 18.32
266 TestStoppedBinaryUpgrade/Upgrade 637.79
276 TestPause/serial/Start 9.86
279 TestNoKubernetes/serial/StartWithK8s 9.84
280 TestNoKubernetes/serial/StartWithStopK8s 5.9
281 TestNoKubernetes/serial/Start 6.77
285 TestNoKubernetes/serial/StartNoArgs 5.94
286 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.69
288 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.63
289 TestNetworkPlugins/group/auto/Start 9.86
290 TestNetworkPlugins/group/kindnet/Start 10.02
291 TestNetworkPlugins/group/calico/Start 9.88
292 TestNetworkPlugins/group/custom-flannel/Start 9.92
293 TestNetworkPlugins/group/false/Start 9.88
294 TestNetworkPlugins/group/enable-default-cni/Start 9.85
295 TestNetworkPlugins/group/flannel/Start 9.82
296 TestNetworkPlugins/group/bridge/Start 9.91
297 TestNetworkPlugins/group/kubenet/Start 9.91
299 TestStartStop/group/old-k8s-version/serial/FirstStart 9.92
300 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
308 TestStartStop/group/old-k8s-version/serial/Pause 0.11
310 TestStartStop/group/no-preload/serial/FirstStart 9.82
311 TestStartStop/group/no-preload/serial/DeployApp 0.09
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/no-preload/serial/SecondStart 5.26
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/no-preload/serial/Pause 0.11
321 TestStartStop/group/embed-certs/serial/FirstStart 10.01
322 TestStartStop/group/embed-certs/serial/DeployApp 0.09
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
326 TestStartStop/group/embed-certs/serial/SecondStart 5.26
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/embed-certs/serial/Pause 0.11
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.9
334 TestStartStop/group/newest-cni/serial/FirstStart 9.95
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.05
344 TestStartStop/group/newest-cni/serial/SecondStart 5.25
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
352 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (40.1s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-752000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-752000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (40.096489125s)

-- stdout --
	{"specversion":"1.0","id":"e06fa373-23fb-4995-b498-1e5de41f38ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-752000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c3bf373-916d-46db-8e0f-f959f8abaa64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18350"}}
	{"specversion":"1.0","id":"fb9a123b-3f53-44a0-82dc-7390247bad3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig"}}
	{"specversion":"1.0","id":"fe607d84-5627-44c6-ad2f-2e6358caf3ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"931058f3-a87b-4fe2-ad0b-593de086fc6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a44e8815-1727-4ece-91f0-9a499e51c488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube"}}
	{"specversion":"1.0","id":"3e93400b-a9ad-487e-9723-3631ad9456eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"4c9fb48d-cc54-4001-b66d-3364790c171d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"053c9ee9-7298-408f-bcc1-2329fda3dfa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"179a3397-431b-4758-bb89-84466e2e5896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca645087-3d4a-4eec-80dc-4a23dbf08609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-752000\" primary control-plane node in \"download-only-752000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"29d7d7df-a3bb-42f9-b900-9d480d02d5ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9316ef5-284a-4279-93ad-dc9464e59ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040] Decompressors:map[bz2:0x140008955c0 gz:0x140008955c8 tar:0x14000895570 tar.bz2:0x14000895580 tar.gz:0x14000895590 tar.xz:0x140008955a0 tar.zst:0x140008955b0 tbz2:0x14000895580 tgz:0x140
00895590 txz:0x140008955a0 tzst:0x140008955b0 xz:0x140008955d0 zip:0x140008955e0 zst:0x140008955d8] Getters:map[file:0x1400222c560 http:0x140007c8230 https:0x140007c8280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"863b5282-eb5c-4683-93f7-df4959f4de93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0311 03:34:22.565619    1436 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:34:22.565751    1436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:34:22.565756    1436 out.go:304] Setting ErrFile to fd 2...
	I0311 03:34:22.565759    1436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:34:22.565885    1436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	W0311 03:34:22.565971    1436 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18350-986/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18350-986/.minikube/config/config.json: no such file or directory
	I0311 03:34:22.567165    1436 out.go:298] Setting JSON to true
	I0311 03:34:22.584561    1436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":234,"bootTime":1710153028,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:34:22.584622    1436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:34:22.589119    1436 out.go:97] [download-only-752000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 03:34:22.593112    1436 out.go:169] MINIKUBE_LOCATION=18350
	I0311 03:34:22.589264    1436 notify.go:220] Checking for updates...
	W0311 03:34:22.589273    1436 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 03:34:22.602074    1436 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:34:22.606112    1436 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:34:22.610098    1436 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:34:22.613069    1436 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	W0311 03:34:22.619072    1436 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 03:34:22.619313    1436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:34:22.624064    1436 out.go:97] Using the qemu2 driver based on user configuration
	I0311 03:34:22.624085    1436 start.go:297] selected driver: qemu2
	I0311 03:34:22.624113    1436 start.go:901] validating driver "qemu2" against <nil>
	I0311 03:34:22.624170    1436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 03:34:22.628078    1436 out.go:169] Automatically selected the socket_vmnet network
	I0311 03:34:22.634716    1436 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 03:34:22.634813    1436 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 03:34:22.634888    1436 cni.go:84] Creating CNI manager for ""
	I0311 03:34:22.634906    1436 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 03:34:22.634950    1436 start.go:340] cluster config:
	{Name:download-only-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:34:22.640617    1436 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 03:34:22.645090    1436 out.go:97] Downloading VM boot image ...
	I0311 03:34:22.645103    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0311 03:34:40.248378    1436 out.go:97] Starting "download-only-752000" primary control-plane node in "download-only-752000" cluster
	I0311 03:34:40.248398    1436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 03:34:40.544704    1436 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 03:34:40.544760    1436 cache.go:56] Caching tarball of preloaded images
	I0311 03:34:40.545484    1436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 03:34:40.551031    1436 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 03:34:40.551057    1436 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:34:41.156837    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 03:35:01.530566    1436 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:01.530747    1436 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:02.228780    1436 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 03:35:02.228986    1436 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/download-only-752000/config.json ...
	I0311 03:35:02.229002    1436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/download-only-752000/config.json: {Name:mk662d2b0a7da82438161412ea8665a1b408d5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:35:02.229219    1436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 03:35:02.229406    1436 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0311 03:35:02.582303    1436 out.go:169] 
	W0311 03:35:02.587324    1436 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040] Decompressors:map[bz2:0x140008955c0 gz:0x140008955c8 tar:0x14000895570 tar.bz2:0x14000895580 tar.gz:0x14000895590 tar.xz:0x140008955a0 tar.zst:0x140008955b0 tbz2:0x14000895580 tgz:0x14000895590 txz:0x140008955a0 tzst:0x140008955b0 xz:0x140008955d0 zip:0x140008955e0 zst:0x140008955d8] Getters:map[file:0x1400222c560 http:0x140007c8230 https:0x140007c8280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0311 03:35:02.587348    1436 out_reason.go:110] 
	W0311 03:35:02.595156    1436 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 03:35:02.599234    1436 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-752000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (40.10s)
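
The root cause is the 404 in the error above: dl.k8s.io serves no darwin/arm64 kubectl (or checksum file) for v1.20.0, so minikube cannot cache the binary and exits with status 40. The URL from the log can be probed directly; a minimal check, assuming curl is available on the host:

	# prints the final HTTP status after redirects; expect 404 for v1.20.0 on darwin/arm64
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl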

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
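
This failure is purely downstream of the json-events one: the 404 above meant kubectl was never cached, so the file the test stats does not exist. The same check can be reproduced by hand with the path from the log:

	stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl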

TestOffline (10.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-255000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-255000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.949979917s)

-- stdout --
	* [offline-docker-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-255000" primary control-plane node in "offline-docker-255000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-255000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:14:39.411218    3906 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:14:39.411340    3906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:39.411344    3906 out.go:304] Setting ErrFile to fd 2...
	I0311 04:14:39.411353    3906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:39.411483    3906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:14:39.412599    3906 out.go:298] Setting JSON to false
	I0311 04:14:39.430332    3906 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2651,"bootTime":1710153028,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:14:39.430399    3906 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:14:39.434273    3906 out.go:177] * [offline-docker-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:14:39.445097    3906 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:14:39.440990    3906 notify.go:220] Checking for updates...
	I0311 04:14:39.453042    3906 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:14:39.456173    3906 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:14:39.459127    3906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:14:39.462098    3906 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:14:39.469146    3906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:14:39.470792    3906 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:14:39.470864    3906 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:14:39.475105    3906 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:14:39.481035    3906 start.go:297] selected driver: qemu2
	I0311 04:14:39.481044    3906 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:14:39.481051    3906 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:14:39.483011    3906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:14:39.486126    3906 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:14:39.489231    3906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:14:39.489263    3906 cni.go:84] Creating CNI manager for ""
	I0311 04:14:39.489270    3906 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:14:39.489274    3906 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:14:39.489305    3906 start.go:340] cluster config:
	{Name:offline-docker-255000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:14:39.493677    3906 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:39.501110    3906 out.go:177] * Starting "offline-docker-255000" primary control-plane node in "offline-docker-255000" cluster
	I0311 04:14:39.505092    3906 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:14:39.505122    3906 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:14:39.505134    3906 cache.go:56] Caching tarball of preloaded images
	I0311 04:14:39.505206    3906 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:14:39.505212    3906 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:14:39.505279    3906 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/offline-docker-255000/config.json ...
	I0311 04:14:39.505290    3906 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/offline-docker-255000/config.json: {Name:mkb07ed631e8bb47932cc57b73bdb06899ea311c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:14:39.505505    3906 start.go:360] acquireMachinesLock for offline-docker-255000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:39.505537    3906 start.go:364] duration metric: took 22.208µs to acquireMachinesLock for "offline-docker-255000"
	I0311 04:14:39.505548    3906 start.go:93] Provisioning new machine with config: &{Name:offline-docker-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:14:39.505579    3906 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:14:39.514174    3906 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:14:39.529271    3906 start.go:159] libmachine.API.Create for "offline-docker-255000" (driver="qemu2")
	I0311 04:14:39.529305    3906 client.go:168] LocalClient.Create starting
	I0311 04:14:39.529403    3906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:14:39.529436    3906 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:39.529445    3906 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:39.529492    3906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:14:39.529517    3906 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:39.529522    3906 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:39.529875    3906 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:14:39.670195    3906 main.go:141] libmachine: Creating SSH key...
	I0311 04:14:39.780133    3906 main.go:141] libmachine: Creating Disk image...
	I0311 04:14:39.780142    3906 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:14:39.780360    3906 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2
	I0311 04:14:39.792934    3906 main.go:141] libmachine: STDOUT: 
	I0311 04:14:39.792955    3906 main.go:141] libmachine: STDERR: 
	I0311 04:14:39.793006    3906 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2 +20000M
	I0311 04:14:39.804836    3906 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:14:39.804868    3906 main.go:141] libmachine: STDERR: 
	I0311 04:14:39.804886    3906 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2
	I0311 04:14:39.804889    3906 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:14:39.804919    3906 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:6c:59:77:6b:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2
	I0311 04:14:39.806641    3906 main.go:141] libmachine: STDOUT: 
	I0311 04:14:39.806658    3906 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:39.806688    3906 client.go:171] duration metric: took 277.387083ms to LocalClient.Create
	I0311 04:14:41.808688    3906 start.go:128] duration metric: took 2.303170875s to createHost
	I0311 04:14:41.808707    3906 start.go:83] releasing machines lock for "offline-docker-255000", held for 2.303234416s
	W0311 04:14:41.808726    3906 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:41.817241    3906 out.go:177] * Deleting "offline-docker-255000" in qemu2 ...
	W0311 04:14:41.826069    3906 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:41.826078    3906 start.go:728] Will try again in 5 seconds ...
	I0311 04:14:46.828175    3906 start.go:360] acquireMachinesLock for offline-docker-255000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:46.828643    3906 start.go:364] duration metric: took 350.167µs to acquireMachinesLock for "offline-docker-255000"
	I0311 04:14:46.828787    3906 start.go:93] Provisioning new machine with config: &{Name:offline-docker-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:14:46.829071    3906 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:14:46.843866    3906 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:14:46.894299    3906 start.go:159] libmachine.API.Create for "offline-docker-255000" (driver="qemu2")
	I0311 04:14:46.894353    3906 client.go:168] LocalClient.Create starting
	I0311 04:14:46.894489    3906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:14:46.894556    3906 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:46.894613    3906 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:46.894684    3906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:14:46.894731    3906 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:46.894746    3906 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:46.895304    3906 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:14:47.053645    3906 main.go:141] libmachine: Creating SSH key...
	I0311 04:14:47.259314    3906 main.go:141] libmachine: Creating Disk image...
	I0311 04:14:47.259322    3906 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:14:47.259514    3906 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2
	I0311 04:14:47.272506    3906 main.go:141] libmachine: STDOUT: 
	I0311 04:14:47.272529    3906 main.go:141] libmachine: STDERR: 
	I0311 04:14:47.272591    3906 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2 +20000M
	I0311 04:14:47.283285    3906 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:14:47.283302    3906 main.go:141] libmachine: STDERR: 
	I0311 04:14:47.283320    3906 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2
	I0311 04:14:47.283324    3906 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:14:47.283359    3906 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:27:61:13:c6:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/offline-docker-255000/disk.qcow2
	I0311 04:14:47.284971    3906 main.go:141] libmachine: STDOUT: 
	I0311 04:14:47.284988    3906 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:47.285004    3906 client.go:171] duration metric: took 390.6565ms to LocalClient.Create
	I0311 04:14:49.287134    3906 start.go:128] duration metric: took 2.458108708s to createHost
	I0311 04:14:49.287183    3906 start.go:83] releasing machines lock for "offline-docker-255000", held for 2.458579666s
	W0311 04:14:49.287625    3906 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:49.298351    3906 out.go:177] 
	W0311 04:14:49.303263    3906 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:14:49.303298    3906 out.go:239] * 
	* 
	W0311 04:14:49.306345    3906 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:14:49.315279    3906 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-255000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-11 04:14:49.331312 -0700 PDT m=+2426.863968042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-255000 -n offline-docker-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-255000 -n offline-docker-255000: exit status 7 (68.44475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-255000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-255000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-255000
--- FAIL: TestOffline (10.14s)
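
Both create attempts die at the same step: libmachine launches QEMU through socket_vmnet_client, and the client cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet. That points at the daemon not running on the agent rather than anything test-specific (the same connection-refused signature likely explains the other ~10 s Start failures in the table above). A quick host-side triage, sketched assuming a standard socket_vmnet install (the launchd label may differ per setup):

	# is anything holding the socket the client tries to reach?
	ls -l /var/run/socket_vmnet
	# socket_vmnet normally runs as a root launchd service; check whether it is loaded
	sudo launchctl list | grep -i socket_vmnet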

TestAddons/parallel/Ingress (34.53s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-597000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-597000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-597000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a0d7e67c-632a-4cc7-8bce-68c5805f85b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a0d7e67c-632a-4cc7-8bce-68c5805f85b8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00378925s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-597000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.037480708s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-arm64 -p addons-597000 addons disable ingress --alsologtostderr -v=1: (7.214331417s)
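
Everything up to the DNS probe passed: the nginx pod went Ready and the in-VM curl succeeded, but the ingress-dns lookup against 192.168.105.2 timed out, i.e. nothing answered DNS on the cluster IP. A hedged way to narrow this down from the host (dig stands in for nslookup; the grep just locates the addon pod, whose exact name may vary):

	# query the ingress-dns server on the minikube IP directly, with a short timeout
	dig +time=2 +tries=1 @192.168.105.2 hello-john.test
	# confirm the ingress-dns pod is actually running in kube-system
	kubectl --context addons-597000 -n kube-system get pods | grep -i ingress-dns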
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-597000 -n addons-597000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-266000                                                                     | download-only-266000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| start   | -o=json --download-only                                                                     | download-only-861000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT |                     |
	|         | -p download-only-861000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-861000                                                                     | download-only-861000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-752000                                                                     | download-only-752000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-266000                                                                     | download-only-266000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-861000                                                                     | download-only-861000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-395000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT |                     |
	|         | binary-mirror-395000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49330                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-395000                                                                     | binary-mirror-395000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| addons  | enable dashboard -p                                                                         | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT |                     |
	|         | addons-597000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT |                     |
	|         | addons-597000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-597000 --wait=true                                                                | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:39 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                                                |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                                           |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-597000 ip                                                                            | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:39 PDT | 11 Mar 24 03:39 PDT |
	| addons  | addons-597000 addons disable                                                                | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:39 PDT | 11 Mar 24 03:39 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-597000 addons                                                                        | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:39 PDT | 11 Mar 24 03:39 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:39 PDT | 11 Mar 24 03:39 PDT |
	|         | addons-597000                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-597000 ssh curl -s                                                                   | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:39 PDT | 11 Mar 24 03:39 PDT |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-597000 ip                                                                            | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:39 PDT | 11 Mar 24 03:39 PDT |
	| addons  | addons-597000 addons                                                                        | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:40 PDT | 11 Mar 24 03:40 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-597000 addons                                                                        | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:40 PDT | 11 Mar 24 03:40 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-597000 addons disable                                                                | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:40 PDT | 11 Mar 24 03:40 PDT |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-597000 addons disable                                                                | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:40 PDT | 11 Mar 24 03:40 PDT |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ssh     | addons-597000 ssh cat                                                                       | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:40 PDT | 11 Mar 24 03:40 PDT |
	|         | /opt/local-path-provisioner/pvc-3949e277-7901-4317-bdda-9cee2a039c24_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-597000 addons disable                                                                | addons-597000        | jenkins | v1.32.0 | 11 Mar 24 03:40 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 03:35:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 03:35:46.921541    1617 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:35:46.921680    1617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:35:46.921683    1617 out.go:304] Setting ErrFile to fd 2...
	I0311 03:35:46.921685    1617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:35:46.921840    1617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:35:46.922959    1617 out.go:298] Setting JSON to false
	I0311 03:35:46.939020    1617 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":318,"bootTime":1710153028,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:35:46.939086    1617 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:35:46.943271    1617 out.go:177] * [addons-597000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 03:35:46.950328    1617 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 03:35:46.954323    1617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:35:46.950393    1617 notify.go:220] Checking for updates...
	I0311 03:35:46.960309    1617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:35:46.963304    1617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:35:46.966316    1617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 03:35:46.969334    1617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 03:35:46.970972    1617 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:35:46.975281    1617 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 03:35:46.982141    1617 start.go:297] selected driver: qemu2
	I0311 03:35:46.982149    1617 start.go:901] validating driver "qemu2" against <nil>
	I0311 03:35:46.982156    1617 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 03:35:46.984459    1617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 03:35:46.987267    1617 out.go:177] * Automatically selected the socket_vmnet network
	I0311 03:35:46.990397    1617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 03:35:46.990439    1617 cni.go:84] Creating CNI manager for ""
	I0311 03:35:46.990448    1617 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 03:35:46.990453    1617 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 03:35:46.990485    1617 start.go:340] cluster config:
	{Name:addons-597000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-597000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:35:46.994997    1617 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 03:35:47.003284    1617 out.go:177] * Starting "addons-597000" primary control-plane node in "addons-597000" cluster
	I0311 03:35:47.007308    1617 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 03:35:47.007323    1617 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 03:35:47.007334    1617 cache.go:56] Caching tarball of preloaded images
	I0311 03:35:47.007396    1617 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 03:35:47.007403    1617 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 03:35:47.007683    1617 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/config.json ...
	I0311 03:35:47.007694    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/config.json: {Name:mk41f7b28ebcecbfbd225f407bd9db6c69de071e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:35:47.007929    1617 start.go:360] acquireMachinesLock for addons-597000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 03:35:47.008095    1617 start.go:364] duration metric: took 160.292µs to acquireMachinesLock for "addons-597000"
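
Note: the two lock acquisitions above (the WriteFile lock for config.json and the machines lock) log the same spec, {Delay:500ms Timeout:...}: poll for the lock at a fixed delay until a deadline expires. A minimal sketch of that pattern, using a hypothetical O_EXCL lockfile helper rather than minikube's actual lock package:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire creates path with O_CREATE|O_EXCL as a crude cross-process lock,
    // retrying every delay until timeout, mirroring {Delay:500ms Timeout:13m0s}.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out waiting for " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	start := time.Now()
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
    }
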
	I0311 03:35:47.008106    1617 start.go:93] Provisioning new machine with config: &{Name:addons-597000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-597000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 03:35:47.008137    1617 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 03:35:47.013373    1617 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0311 03:35:47.245817    1617 start.go:159] libmachine.API.Create for "addons-597000" (driver="qemu2")
	I0311 03:35:47.245849    1617 client.go:168] LocalClient.Create starting
	I0311 03:35:47.246010    1617 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 03:35:47.335624    1617 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 03:35:47.631718    1617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 03:35:48.315474    1617 main.go:141] libmachine: Creating SSH key...
	I0311 03:35:48.392108    1617 main.go:141] libmachine: Creating Disk image...
	I0311 03:35:48.392113    1617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 03:35:48.392323    1617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/disk.qcow2
	I0311 03:35:48.413505    1617 main.go:141] libmachine: STDOUT: 
	I0311 03:35:48.413530    1617 main.go:141] libmachine: STDERR: 
	I0311 03:35:48.413591    1617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/disk.qcow2 +20000M
	I0311 03:35:48.424451    1617 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 03:35:48.424466    1617 main.go:141] libmachine: STDERR: 
	I0311 03:35:48.424479    1617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/disk.qcow2
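
Note: the disk-image step above shells out to qemu-img twice — a raw-to-qcow2 convert, then a resize by +20000M (the virtual size grows; the file stays sparse). A minimal Go sketch of the same two invocations, with hypothetical paths standing in for the profile directory:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	raw := "/tmp/addons-597000/disk.qcow2.raw" // hypothetical paths
    	qcow2 := "/tmp/addons-597000/disk.qcow2"

    	// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
    	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
    		fmt.Printf("convert failed: %v\n%s", err, out)
    		return
    	}
    	// qemu-img resize <qcow2> +20000M
    	if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
    		fmt.Printf("resize failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("Image resized.")
    }
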
	I0311 03:35:48.424484    1617 main.go:141] libmachine: Starting QEMU VM...
	I0311 03:35:48.424510    1617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:65:aa:02:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/disk.qcow2
	I0311 03:35:48.480705    1617 main.go:141] libmachine: STDOUT: 
	I0311 03:35:48.480734    1617 main.go:141] libmachine: STDERR: 
	I0311 03:35:48.480737    1617 main.go:141] libmachine: Attempt 0
	I0311 03:35:48.480748    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:35:48.480814    1617 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 03:35:48.480838    1617 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f02ed0}
	I0311 03:35:50.482936    1617 main.go:141] libmachine: Attempt 1
	I0311 03:35:50.483014    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:35:50.483230    1617 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 03:35:50.483278    1617 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f02ed0}
	I0311 03:35:52.484375    1617 main.go:141] libmachine: Attempt 2
	I0311 03:35:52.484479    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:35:52.484738    1617 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 03:35:52.484829    1617 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f02ed0}
	I0311 03:35:54.486398    1617 main.go:141] libmachine: Attempt 3
	I0311 03:35:54.486432    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:35:54.486478    1617 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 03:35:54.486515    1617 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f02ed0}
	I0311 03:35:56.488467    1617 main.go:141] libmachine: Attempt 4
	I0311 03:35:56.488480    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:35:56.488541    1617 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 03:35:56.488548    1617 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f02ed0}
	I0311 03:35:58.490497    1617 main.go:141] libmachine: Attempt 5
	I0311 03:35:58.490505    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:35:58.490535    1617 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 03:35:58.490541    1617 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f02ed0}
	I0311 03:36:00.492521    1617 main.go:141] libmachine: Attempt 6
	I0311 03:36:00.492549    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:36:00.492653    1617 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 03:36:00.492664    1617 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f02ed0}
	I0311 03:36:02.494715    1617 main.go:141] libmachine: Attempt 7
	I0311 03:36:02.494830    1617 main.go:141] libmachine: Searching for 26:74:65:aa:2:22 in /var/db/dhcpd_leases ...
	I0311 03:36:02.495108    1617 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0311 03:36:02.495156    1617 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:26:74:65:aa:2:22 ID:1,26:74:65:aa:2:22 Lease:0x65f03011}
	I0311 03:36:02.495171    1617 main.go:141] libmachine: Found match: 26:74:65:aa:2:22
	I0311 03:36:02.495203    1617 main.go:141] libmachine: IP: 192.168.105.2
	I0311 03:36:02.495222    1617 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
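
Note: the attempts above poll /var/db/dhcpd_leases roughly every two seconds until the VM's MAC appears. The QEMU netdev was started with mac=26:74:65:aa:02:22, but the search key is 26:74:65:aa:2:22 because the macOS DHCP daemon writes octets without leading zeros. A sketch of that polling loop, assuming the usual one-field-per-line lease format (this is not minikube's actual parser):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"time"
    )

    // findIP scans /var/db/dhcpd_leases for a lease whose hw_address ends
    // with mac, remembering the ip_address field seen earlier in the entry.
    func findIP(mac string) (string, bool) {
    	data, err := os.ReadFile("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", false
    	}
    	var ip string
    	for _, line := range strings.Split(string(data), "\n") {
    		line = strings.TrimSpace(line)
    		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
    			ip = v
    		}
    		if v, ok := strings.CutPrefix(line, "hw_address="); ok {
    			// field looks like "1,26:74:65:aa:2:22"
    			if strings.HasSuffix(v, mac) && ip != "" {
    				return ip, true
    			}
    		}
    	}
    	return "", false
    }

    func main() {
    	mac := "26:74:65:aa:2:22" // leading zeros stripped, as in the log
    	for attempt := 0; ; attempt++ {
    		if ip, ok := findIP(mac); ok {
    			fmt.Printf("Found match: %s -> IP: %s\n", mac, ip)
    			return
    		}
    		fmt.Printf("Attempt %d: no lease yet\n", attempt)
    		time.Sleep(2 * time.Second)
    	}
    }
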
	I0311 03:36:05.510561    1617 machine.go:94] provisionDockerMachine start ...
	I0311 03:36:05.511646    1617 main.go:141] libmachine: Using SSH client type: native
	I0311 03:36:05.512030    1617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10069da90] 0x1006a02f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 03:36:05.512048    1617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 03:36:05.568878    1617 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 03:36:05.568905    1617 buildroot.go:166] provisioning hostname "addons-597000"
	I0311 03:36:05.568967    1617 main.go:141] libmachine: Using SSH client type: native
	I0311 03:36:05.569092    1617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10069da90] 0x1006a02f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 03:36:05.569097    1617 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-597000 && echo "addons-597000" | sudo tee /etc/hostname
	I0311 03:36:05.622139    1617 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-597000
	
	I0311 03:36:05.622194    1617 main.go:141] libmachine: Using SSH client type: native
	I0311 03:36:05.622313    1617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10069da90] 0x1006a02f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 03:36:05.622322    1617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-597000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-597000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-597000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 03:36:05.674309    1617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 03:36:05.674324    1617 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18350-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18350-986/.minikube}
	I0311 03:36:05.674334    1617 buildroot.go:174] setting up certificates
	I0311 03:36:05.674339    1617 provision.go:84] configureAuth start
	I0311 03:36:05.674344    1617 provision.go:143] copyHostCerts
	I0311 03:36:05.674439    1617 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem (1082 bytes)
	I0311 03:36:05.674652    1617 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem (1123 bytes)
	I0311 03:36:05.674770    1617 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem (1675 bytes)
	I0311 03:36:05.674871    1617 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem org=jenkins.addons-597000 san=[127.0.0.1 192.168.105.2 addons-597000 localhost minikube]
	I0311 03:36:05.853976    1617 provision.go:177] copyRemoteCerts
	I0311 03:36:05.854056    1617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 03:36:05.854077    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:05.881424    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 03:36:05.890302    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 03:36:05.899038    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 03:36:05.907731    1617 provision.go:87] duration metric: took 233.398542ms to configureAuth
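
Note: configureAuth above generates a server certificate whose SANs cover the loopback address, the VM IP, and the host/cluster names (127.0.0.1 192.168.105.2 addons-597000 localhost minikube). A minimal crypto/x509 sketch issuing a cert with those SANs; it self-signs for brevity, whereas minikube signs with its CA (ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-597000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log:
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.2")},
    		DNSNames:    []string{"addons-597000", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
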
	I0311 03:36:05.907739    1617 buildroot.go:189] setting minikube options for container-runtime
	I0311 03:36:05.907839    1617 config.go:182] Loaded profile config "addons-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:36:05.907879    1617 main.go:141] libmachine: Using SSH client type: native
	I0311 03:36:05.908035    1617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10069da90] 0x1006a02f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 03:36:05.908044    1617 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0311 03:36:05.955175    1617 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0311 03:36:05.955185    1617 buildroot.go:70] root file system type: tmpfs
	I0311 03:36:05.955238    1617 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0311 03:36:05.955280    1617 main.go:141] libmachine: Using SSH client type: native
	I0311 03:36:05.955402    1617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10069da90] 0x1006a02f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 03:36:05.955435    1617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0311 03:36:06.007478    1617 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0311 03:36:06.007536    1617 main.go:141] libmachine: Using SSH client type: native
	I0311 03:36:06.007641    1617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10069da90] 0x1006a02f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 03:36:06.007649    1617 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0311 03:36:06.349123    1617 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0311 03:36:06.349136    1617 machine.go:97] duration metric: took 838.596791ms to provisionDockerMachine
	I0311 03:36:06.349142    1617 client.go:171] duration metric: took 19.104198417s to LocalClient.Create
	I0311 03:36:06.349154    1617 start.go:167] duration metric: took 19.104249959s to libmachine.API.Create "addons-597000"
	I0311 03:36:06.349158    1617 start.go:293] postStartSetup for "addons-597000" (driver="qemu2")
	I0311 03:36:06.349164    1617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 03:36:06.349232    1617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 03:36:06.349241    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:06.374839    1617 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 03:36:06.376186    1617 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 03:36:06.376193    1617 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/addons for local assets ...
	I0311 03:36:06.376266    1617 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/files for local assets ...
	I0311 03:36:06.376299    1617 start.go:296] duration metric: took 27.139041ms for postStartSetup
	I0311 03:36:06.376701    1617 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/config.json ...
	I0311 03:36:06.376873    1617 start.go:128] duration metric: took 19.369653583s to createHost
	I0311 03:36:06.376914    1617 main.go:141] libmachine: Using SSH client type: native
	I0311 03:36:06.377003    1617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10069da90] 0x1006a02f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 03:36:06.377007    1617 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 03:36:06.423486    1617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710153366.823228378
	
	I0311 03:36:06.423494    1617 fix.go:216] guest clock: 1710153366.823228378
	I0311 03:36:06.423498    1617 fix.go:229] Guest: 2024-03-11 03:36:06.823228378 -0700 PDT Remote: 2024-03-11 03:36:06.376876 -0700 PDT m=+19.477286626 (delta=446.352378ms)
	I0311 03:36:06.423508    1617 fix.go:200] guest clock delta is within tolerance: 446.352378ms
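
Note: the fix above runs `date +%s.%N` in the guest, compares the result to the host clock, and accepts the 446ms delta as within tolerance. A small sketch of that comparison, with the guest timestamp hardcoded to the logged value and a hypothetical 2s tolerance (the actual threshold is not shown in this log):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, as logged:
    	guestOut := "1710153366.823228378"
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	// float64 loses sub-microsecond precision; fine for a sketch.
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	host := time.Now()

    	delta := guest.Sub(host)
    	tolerance := 2 * time.Second // hypothetical threshold
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
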
	I0311 03:36:06.423515    1617 start.go:83] releasing machines lock for "addons-597000", held for 19.416338292s
	I0311 03:36:06.423802    1617 ssh_runner.go:195] Run: cat /version.json
	I0311 03:36:06.423810    1617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 03:36:06.423811    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:06.423843    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:06.575314    1617 ssh_runner.go:195] Run: systemctl --version
	I0311 03:36:06.578710    1617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 03:36:06.581558    1617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 03:36:06.581603    1617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 03:36:06.589954    1617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 03:36:06.589964    1617 start.go:494] detecting cgroup driver to use...
	I0311 03:36:06.590155    1617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 03:36:06.598950    1617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0311 03:36:06.603477    1617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 03:36:06.607606    1617 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 03:36:06.607634    1617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 03:36:06.611568    1617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 03:36:06.615397    1617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 03:36:06.619167    1617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 03:36:06.623124    1617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 03:36:06.626987    1617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 03:36:06.631059    1617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 03:36:06.634525    1617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 03:36:06.638222    1617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 03:36:06.725845    1617 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0311 03:36:06.737655    1617 start.go:494] detecting cgroup driver to use...
	I0311 03:36:06.737714    1617 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0311 03:36:06.743855    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 03:36:06.753474    1617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 03:36:06.761811    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 03:36:06.767126    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 03:36:06.772669    1617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 03:36:06.818489    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 03:36:06.824820    1617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 03:36:06.831294    1617 ssh_runner.go:195] Run: which cri-dockerd
	I0311 03:36:06.832685    1617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0311 03:36:06.835827    1617 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0311 03:36:06.841680    1617 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0311 03:36:06.925566    1617 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0311 03:36:07.016450    1617 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0311 03:36:07.016518    1617 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0311 03:36:07.022627    1617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 03:36:07.105691    1617 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 03:36:08.263544    1617 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157883083s)
	I0311 03:36:08.263613    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0311 03:36:08.269280    1617 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0311 03:36:08.275805    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 03:36:08.281314    1617 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0311 03:36:08.365309    1617 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0311 03:36:08.448609    1617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 03:36:08.531889    1617 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0311 03:36:08.538785    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 03:36:08.544413    1617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 03:36:08.637189    1617 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0311 03:36:08.659764    1617 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0311 03:36:08.659845    1617 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0311 03:36:08.662786    1617 start.go:562] Will wait 60s for crictl version
	I0311 03:36:08.662830    1617 ssh_runner.go:195] Run: which crictl
	I0311 03:36:08.664200    1617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 03:36:08.683256    1617 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0311 03:36:08.683328    1617 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 03:36:08.693057    1617 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 03:36:08.708453    1617 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0311 03:36:08.708604    1617 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0311 03:36:08.709953    1617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 03:36:08.714449    1617 kubeadm.go:877] updating cluster {Name:addons-597000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-597000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 03:36:08.714496    1617 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 03:36:08.714539    1617 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 03:36:08.719703    1617 docker.go:685] Got preloaded images: 
	I0311 03:36:08.719715    1617 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0311 03:36:08.719760    1617 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 03:36:08.723539    1617 ssh_runner.go:195] Run: which lz4
	I0311 03:36:08.725054    1617 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 03:36:08.726314    1617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 03:36:08.726330    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I0311 03:36:10.008677    1617 docker.go:649] duration metric: took 1.283711291s to copy over tarball
	I0311 03:36:10.008738    1617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 03:36:11.081565    1617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.072849125s)
	I0311 03:36:11.081580    1617 ssh_runner.go:146] rm: /preloaded.tar.lz4
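
Note: the preload step above probes for /preloaded.tar.lz4 with stat, treats the nonzero exit status as "absent", streams the 357MB tarball over scp, extracts it with lz4-aware tar, then removes it. A sketch of the probe idiom — mapping exit status to existence — run locally for brevity; in the log the same command is wrapped by the SSH runner:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // exists runs `stat` and maps a nonzero exit status to "file absent",
    // mirroring the existence check logged above (GNU stat flags, as on the guest).
    func exists(path string) (bool, error) {
    	err := exec.Command("stat", "-c", "%s %y", path).Run()
    	if err == nil {
    		return true, nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		return false, nil // Process exited with status 1: not present
    	}
    	return false, err // stat itself failed to start
    }

    func main() {
    	ok, err := exists("/preloaded.tar.lz4")
    	fmt.Println(ok, err) // false <nil> -> proceed with the scp + tar -I lz4 extract
    }
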
	I0311 03:36:11.097628    1617 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 03:36:11.101431    1617 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0311 03:36:11.107239    1617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 03:36:11.191385    1617 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 03:36:13.750168    1617 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.558887083s)
	I0311 03:36:13.750266    1617 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 03:36:13.756184    1617 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 03:36:13.756193    1617 cache_images.go:84] Images are preloaded, skipping loading
	I0311 03:36:13.756198    1617 kubeadm.go:928] updating node { 192.168.105.2 8443 v1.28.4 docker true true} ...
	I0311 03:36:13.756265    1617 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-597000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-597000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 03:36:13.756328    1617 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 03:36:13.767913    1617 cni.go:84] Creating CNI manager for ""
	I0311 03:36:13.767925    1617 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 03:36:13.767936    1617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 03:36:13.767945    1617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-597000 NodeName:addons-597000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 03:36:13.768012    1617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-597000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 03:36:13.768072    1617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 03:36:13.771671    1617 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 03:36:13.771711    1617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 03:36:13.775238    1617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0311 03:36:13.781195    1617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 03:36:13.786954    1617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
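The 2158-byte kubeadm.yaml.new just copied is the config printed above: a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal Go sketch for walking such a multi-document file with gopkg.in/yaml.v3 (illustrative only, not minikube's own loader):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // yaml.Decoder yields one document per Decode call and
        // returns io.EOF after the last "---"-separated document.
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }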
	I0311 03:36:13.792898    1617 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0311 03:36:13.794410    1617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
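The bash one-liner above is an idempotent update: it filters out any existing line ending in "<tab>control-plane.minikube.internal", appends a fresh record, and copies the result back over /etc/hosts through a temp file, so reruns never duplicate the entry. The same pattern in plain Go (a sketch of the technique, not minikube's code):

    package main

    import (
        "os"
        "strings"
    )

    // setHostRecord drops any stale "<ip>\t<host>" line and appends a fresh
    // one, mirroring the grep -v / echo / cp pipeline in the log line above.
    func setHostRecord(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // discard the old record, if any
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := setHostRecord("/etc/hosts", "192.168.105.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }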
	I0311 03:36:13.798743    1617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 03:36:13.885129    1617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 03:36:13.893185    1617 certs.go:68] Setting up /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000 for IP: 192.168.105.2
	I0311 03:36:13.893193    1617 certs.go:194] generating shared ca certs ...
	I0311 03:36:13.893207    1617 certs.go:226] acquiring lock for ca certs: {Name:mk0eff4ed47e91bcbb09c749a04fbf8f2901eda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:13.893392    1617 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key
	I0311 03:36:14.031893    1617 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt ...
	I0311 03:36:14.031914    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt: {Name:mka6bd50a38858ea5f4ed5b9a27873539b1c3441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.032223    1617 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key ...
	I0311 03:36:14.032227    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key: {Name:mk3df15a2c2c61f2b8b66a0ebda321d25369084d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.032340    1617 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key
	I0311 03:36:14.069368    1617 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.crt ...
	I0311 03:36:14.069380    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.crt: {Name:mkc484f8e8d78fddcf87cca4352cd73199381c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.069582    1617 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key ...
	I0311 03:36:14.069586    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key: {Name:mk5bd0be18168d98a2cb61d5faf9aac11a087951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.069714    1617 certs.go:256] generating profile certs ...
	I0311 03:36:14.069762    1617 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.key
	I0311 03:36:14.069784    1617 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt with IP's: []
	I0311 03:36:14.159820    1617 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt ...
	I0311 03:36:14.159825    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: {Name:mk81f3afff164133054adc6d16317546f6607030 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.159986    1617 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.key ...
	I0311 03:36:14.159990    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.key: {Name:mka1363ccd544806885b9b8bd9a73e71b503a891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.160110    1617 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.key.a2454f25
	I0311 03:36:14.160121    1617 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.crt.a2454f25 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0311 03:36:14.220437    1617 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.crt.a2454f25 ...
	I0311 03:36:14.220441    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.crt.a2454f25: {Name:mk132ba9b3af8a06bb510d7c0ad920a03242ff42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.220579    1617 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.key.a2454f25 ...
	I0311 03:36:14.220582    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.key.a2454f25: {Name:mk933d4c5f250052503acdda81ff344378b1966f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.220685    1617 certs.go:381] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.crt.a2454f25 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.crt
	I0311 03:36:14.220874    1617 certs.go:385] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.key.a2454f25 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.key
	I0311 03:36:14.220979    1617 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.key
	I0311 03:36:14.220992    1617 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.crt with IP's: []
	I0311 03:36:14.262060    1617 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.crt ...
	I0311 03:36:14.262064    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.crt: {Name:mk0b66d898be9fb54ca37068067fe64096dbc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.262207    1617 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.key ...
	I0311 03:36:14.262210    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.key: {Name:mke57cf3ea01c43a54ff67f94fc3d24625957675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:14.262427    1617 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 03:36:14.262455    1617 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem (1082 bytes)
	I0311 03:36:14.262473    1617 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem (1123 bytes)
	I0311 03:36:14.262495    1617 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem (1675 bytes)
	I0311 03:36:14.262845    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 03:36:14.271717    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 03:36:14.279579    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 03:36:14.287575    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 03:36:14.295582    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 03:36:14.303699    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 03:36:14.311773    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 03:36:14.319898    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 03:36:14.328300    1617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 03:36:14.336251    1617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 03:36:14.343148    1617 ssh_runner.go:195] Run: openssl version
	I0311 03:36:14.345514    1617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 03:36:14.349275    1617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 03:36:14.351045    1617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0311 03:36:14.351069    1617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 03:36:14.353093    1617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 03:36:14.356659    1617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 03:36:14.358226    1617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 03:36:14.358269    1617 kubeadm.go:391] StartCluster: {Name:addons-597000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-597000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:36:14.358333    1617 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 03:36:14.364102    1617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 03:36:14.367496    1617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 03:36:14.370980    1617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 03:36:14.374676    1617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 03:36:14.374681    1617 kubeadm.go:156] found existing configuration files:
	
	I0311 03:36:14.374703    1617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 03:36:14.378078    1617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 03:36:14.378105    1617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 03:36:14.381414    1617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 03:36:14.384531    1617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 03:36:14.384554    1617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 03:36:14.387638    1617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 03:36:14.390927    1617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 03:36:14.390950    1617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 03:36:14.394547    1617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 03:36:14.398008    1617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 03:36:14.398037    1617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 03:36:14.401391    1617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 03:36:14.425476    1617 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 03:36:14.427573    1617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 03:36:14.478294    1617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 03:36:14.478354    1617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 03:36:14.478399    1617 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0311 03:36:14.574580    1617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 03:36:14.586706    1617 out.go:204]   - Generating certificates and keys ...
	I0311 03:36:14.586743    1617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 03:36:14.586777    1617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 03:36:14.637448    1617 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 03:36:14.667572    1617 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 03:36:14.768325    1617 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 03:36:14.858751    1617 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 03:36:15.001274    1617 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 03:36:15.001338    1617 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-597000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0311 03:36:15.177373    1617 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 03:36:15.177467    1617 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-597000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0311 03:36:15.276979    1617 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 03:36:15.415535    1617 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 03:36:15.541755    1617 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 03:36:15.541804    1617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 03:36:15.864901    1617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 03:36:15.961630    1617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 03:36:16.171703    1617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 03:36:16.220597    1617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 03:36:16.220795    1617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 03:36:16.221965    1617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 03:36:16.226326    1617 out.go:204]   - Booting up control plane ...
	I0311 03:36:16.226386    1617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 03:36:16.226431    1617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 03:36:16.226464    1617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 03:36:16.231590    1617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 03:36:16.231846    1617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 03:36:16.231950    1617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 03:36:16.325056    1617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 03:36:20.326639    1617 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001923 seconds
	I0311 03:36:20.326698    1617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 03:36:20.332615    1617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 03:36:20.841886    1617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 03:36:20.841989    1617 kubeadm.go:309] [mark-control-plane] Marking the node addons-597000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 03:36:21.347549    1617 kubeadm.go:309] [bootstrap-token] Using token: o18ufw.vzcmd0senmjumfmw
	I0311 03:36:21.356955    1617 out.go:204]   - Configuring RBAC rules ...
	I0311 03:36:21.357013    1617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 03:36:21.357775    1617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 03:36:21.360462    1617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 03:36:21.361630    1617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0311 03:36:21.362709    1617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 03:36:21.364308    1617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 03:36:21.367996    1617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 03:36:21.547202    1617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 03:36:21.760528    1617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 03:36:21.760976    1617 kubeadm.go:309] 
	I0311 03:36:21.761008    1617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 03:36:21.761011    1617 kubeadm.go:309] 
	I0311 03:36:21.761051    1617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 03:36:21.761057    1617 kubeadm.go:309] 
	I0311 03:36:21.761071    1617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 03:36:21.761109    1617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 03:36:21.761143    1617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 03:36:21.761146    1617 kubeadm.go:309] 
	I0311 03:36:21.761174    1617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 03:36:21.761177    1617 kubeadm.go:309] 
	I0311 03:36:21.761206    1617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 03:36:21.761210    1617 kubeadm.go:309] 
	I0311 03:36:21.761240    1617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 03:36:21.761287    1617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 03:36:21.761321    1617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 03:36:21.761327    1617 kubeadm.go:309] 
	I0311 03:36:21.761371    1617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 03:36:21.761415    1617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 03:36:21.761418    1617 kubeadm.go:309] 
	I0311 03:36:21.761465    1617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o18ufw.vzcmd0senmjumfmw \
	I0311 03:36:21.761519    1617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e \
	I0311 03:36:21.761530    1617 kubeadm.go:309] 	--control-plane 
	I0311 03:36:21.761533    1617 kubeadm.go:309] 
	I0311 03:36:21.761585    1617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 03:36:21.761591    1617 kubeadm.go:309] 
	I0311 03:36:21.761633    1617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o18ufw.vzcmd0senmjumfmw \
	I0311 03:36:21.761691    1617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e 
	I0311 03:36:21.761751    1617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
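The --discovery-token-ca-cert-hash in the join commands above is not a hash of the certificate file; it is SHA-256 over the CA's Subject Public Key Info, which lets a joining node pin the control plane's CA without transporting the certificate itself. A self-contained sketch of the computation in Go, reading the ca.crt path this log installed earlier:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the
        // whole certificate, and prefixes the result with "sha256:".
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }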
	I0311 03:36:21.761759    1617 cni.go:84] Creating CNI manager for ""
	I0311 03:36:21.761767    1617 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 03:36:21.770155    1617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 03:36:21.776282    1617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 03:36:21.783080    1617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
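The log records only the size (457 bytes) of the conflist written to /etc/cni/net.d/1-k8s.conflist, not its contents. A representative bridge conflist for the pod CIDR chosen above, 10.244.0.0/16, would look roughly like the following (illustrative only; the actual file may differ):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }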
	I0311 03:36:21.788853    1617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 03:36:21.788913    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:21.788914    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-597000 minikube.k8s.io/updated_at=2024_03_11T03_36_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=addons-597000 minikube.k8s.io/primary=true
	I0311 03:36:21.857787    1617 ops.go:34] apiserver oom_adj: -16
	I0311 03:36:21.857829    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:22.359903    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:22.859855    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:23.359873    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:23.859845    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:24.359828    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:24.859797    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:25.359307    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:25.859741    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:26.359714    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:26.858028    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:27.359646    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:27.859663    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:28.359586    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:28.859570    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:29.359390    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:29.859512    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:30.359535    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:30.859449    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:31.359507    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:31.859452    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:32.359379    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:32.859412    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:33.359375    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:33.857641    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:34.359274    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:34.859316    1617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 03:36:34.910039    1617 kubeadm.go:1106] duration metric: took 13.1217945s to wait for elevateKubeSystemPrivileges
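The long run of "kubectl get sa default" calls above is a ~500ms poll: the cluster-admin binding applied at 03:36:21 only takes effect for workloads once the default ServiceAccount exists, so elevateKubeSystemPrivileges spins until that probe exits zero (13.12s here). The retry shape, reduced to a Go sketch (the helper and its 2-minute deadline are hypothetical, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitFor reruns a probe command every 500ms until it exits zero
    // or the deadline passes -- the loop visible in the log above.
    func waitFor(timeout time.Duration, name string, args ...string) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command(name, args...).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, name)
    }

    func main() {
        fmt.Println(waitFor(2*time.Minute, "kubectl", "get", "sa", "default"))
    }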
	W0311 03:36:34.910063    1617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 03:36:34.910069    1617 kubeadm.go:393] duration metric: took 20.55278s to StartCluster
	I0311 03:36:34.910102    1617 settings.go:142] acquiring lock: {Name:mk914df43a11d01b4609d1cefd86c6d6814b7b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:34.910255    1617 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:36:34.910441    1617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:36:34.910662    1617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 03:36:34.910689    1617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 03:36:34.915559    1617 out.go:177] * Verifying Kubernetes components...
	I0311 03:36:34.910746    1617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0311 03:36:34.910865    1617 config.go:182] Loaded profile config "addons-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:36:34.915665    1617 addons.go:69] Setting yakd=true in profile "addons-597000"
	I0311 03:36:34.924183    1617 addons.go:234] Setting addon yakd=true in "addons-597000"
	I0311 03:36:34.924190    1617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 03:36:34.924211    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.915671    1617 addons.go:69] Setting metrics-server=true in profile "addons-597000"
	I0311 03:36:34.924235    1617 addons.go:234] Setting addon metrics-server=true in "addons-597000"
	I0311 03:36:34.915674    1617 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-597000"
	I0311 03:36:34.924276    1617 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-597000"
	I0311 03:36:34.924285    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.915676    1617 addons.go:69] Setting default-storageclass=true in profile "addons-597000"
	I0311 03:36:34.924351    1617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-597000"
	I0311 03:36:34.915680    1617 addons.go:69] Setting cloud-spanner=true in profile "addons-597000"
	I0311 03:36:34.915682    1617 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-597000"
	I0311 03:36:34.915685    1617 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-597000"
	I0311 03:36:34.915686    1617 addons.go:69] Setting registry=true in profile "addons-597000"
	I0311 03:36:34.915681    1617 addons.go:69] Setting inspektor-gadget=true in profile "addons-597000"
	I0311 03:36:34.915688    1617 addons.go:69] Setting storage-provisioner=true in profile "addons-597000"
	I0311 03:36:34.915688    1617 addons.go:69] Setting volumesnapshots=true in profile "addons-597000"
	I0311 03:36:34.915692    1617 addons.go:69] Setting gcp-auth=true in profile "addons-597000"
	I0311 03:36:34.915709    1617 addons.go:69] Setting ingress-dns=true in profile "addons-597000"
	I0311 03:36:34.915723    1617 addons.go:69] Setting ingress=true in profile "addons-597000"
	I0311 03:36:34.924257    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924446    1617 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-597000"
	I0311 03:36:34.924458    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924466    1617 addons.go:234] Setting addon inspektor-gadget=true in "addons-597000"
	I0311 03:36:34.924498    1617 mustload.go:65] Loading cluster: addons-597000
	I0311 03:36:34.924508    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924522    1617 addons.go:234] Setting addon ingress=true in "addons-597000"
	I0311 03:36:34.924563    1617 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-597000"
	I0311 03:36:34.924565    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924571    1617 addons.go:234] Setting addon registry=true in "addons-597000"
	I0311 03:36:34.924582    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924635    1617 retry.go:31] will retry after 1.44753759s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924651    1617 addons.go:234] Setting addon storage-provisioner=true in "addons-597000"
	I0311 03:36:34.924664    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924694    1617 retry.go:31] will retry after 1.19416652s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924697    1617 retry.go:31] will retry after 810.939138ms: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924707    1617 addons.go:234] Setting addon volumesnapshots=true in "addons-597000"
	I0311 03:36:34.924717    1617 config.go:182] Loaded profile config "addons-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:36:34.924788    1617 retry.go:31] will retry after 1.049883659s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924491    1617 addons.go:234] Setting addon ingress-dns=true in "addons-597000"
	I0311 03:36:34.924808    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924820    1617 retry.go:31] will retry after 1.392419609s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924707    1617 addons.go:234] Setting addon cloud-spanner=true in "addons-597000"
	I0311 03:36:34.924829    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924867    1617 retry.go:31] will retry after 1.26916085s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924900    1617 retry.go:31] will retry after 1.199693264s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924722    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.924980    1617 retry.go:31] will retry after 1.049506626s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.924990    1617 retry.go:31] will retry after 1.218636209s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.925042    1617 retry.go:31] will retry after 1.113342691s: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.925123    1617 retry.go:31] will retry after 904.339757ms: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.925191    1617 retry.go:31] will retry after 789.900688ms: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:34.929534    1617 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0311 03:36:34.932633    1617 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0311 03:36:34.932641    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0311 03:36:34.932649    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:34.934575    1617 addons.go:234] Setting addon default-storageclass=true in "addons-597000"
	I0311 03:36:34.934594    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:34.935286    1617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 03:36:34.935292    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 03:36:34.935298    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:34.982897    1617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
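The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then feeds the result to kubectl replace. After the edit the Corefile gains this stanza, which is what makes host.minikube.internal resolve to the gateway from inside pods:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }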
	I0311 03:36:35.042500    1617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 03:36:35.087492    1617 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0311 03:36:35.087503    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0311 03:36:35.120934    1617 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0311 03:36:35.120947    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0311 03:36:35.129938    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 03:36:35.160865    1617 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0311 03:36:35.160879    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0311 03:36:35.185855    1617 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0311 03:36:35.185867    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0311 03:36:35.221166    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0311 03:36:35.665200    1617 start.go:948] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0311 03:36:35.665612    1617 node_ready.go:35] waiting up to 6m0s for node "addons-597000" to be "Ready" ...
	I0311 03:36:35.675835    1617 node_ready.go:49] node "addons-597000" has status "Ready":"True"
	I0311 03:36:35.675855    1617 node_ready.go:38] duration metric: took 10.224167ms for node "addons-597000" to be "Ready" ...
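node_ready reports the node "Ready" as soon as its NodeReady condition is True. Checked directly with client-go, the same test looks roughly like this (a sketch; the kubeconfig path is the one updated above, and the standard k8s.io client packages are assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/18350-986/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-597000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A node is schedulable-ready when the NodeReady condition is True.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
            }
        }
    }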
	I0311 03:36:35.675860    1617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 03:36:35.685902    1617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-848xf" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:35.723599    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0311 03:36:35.727635    1617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0311 03:36:35.727649    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0311 03:36:35.727659    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:35.741624    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0311 03:36:35.754170    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0311 03:36:35.762549    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0311 03:36:35.773542    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0311 03:36:35.780585    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0311 03:36:35.778231    1617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0311 03:36:35.783833    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0311 03:36:35.787489    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0311 03:36:35.793516    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0311 03:36:35.790856    1617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0311 03:36:35.801114    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0311 03:36:35.808260    1617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0311 03:36:35.815238    1617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0311 03:36:35.815246    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0311 03:36:35.810508    1617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0311 03:36:35.815256    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:35.815263    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0311 03:36:35.833496    1617 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-597000 service yakd-dashboard -n yakd-dashboard
	
	I0311 03:36:35.825931    1617 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0311 03:36:35.840533    1617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0311 03:36:35.833509    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0311 03:36:35.847517    1617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 03:36:35.854401    1617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 03:36:35.857754    1617 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0311 03:36:35.859601    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0311 03:36:35.858564    1617 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 03:36:35.859630    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0311 03:36:35.859633    1617 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 03:36:35.859637    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0311 03:36:35.859644    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:35.871840    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 03:36:35.872751    1617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0311 03:36:35.872757    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0311 03:36:35.879956    1617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0311 03:36:35.879968    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0311 03:36:35.887484    1617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0311 03:36:35.887496    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0311 03:36:35.896178    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 03:36:35.911419    1617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0311 03:36:35.911431    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0311 03:36:35.932700    1617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0311 03:36:35.932712    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0311 03:36:35.976589    1617 retry.go:31] will retry after 939.994945ms: connect: dial unix /Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/monitor: connect: connection refused
	I0311 03:36:35.976821    1617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0311 03:36:35.976828    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0311 03:36:35.977145    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:36.004143    1617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0311 03:36:36.004153    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0311 03:36:36.010350    1617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0311 03:36:36.010358    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0311 03:36:36.016595    1617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 03:36:36.016603    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0311 03:36:36.022939    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 03:36:36.045558    1617 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0311 03:36:36.049532    1617 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0311 03:36:36.049547    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0311 03:36:36.049566    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:36.121637    1617 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-597000"
	I0311 03:36:36.121676    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:36.126450    1617 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0311 03:36:36.129451    1617 out.go:177]   - Using image docker.io/busybox:stable
	I0311 03:36:36.133585    1617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 03:36:36.133597    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0311 03:36:36.133607    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:36.138458    1617 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0311 03:36:36.142592    1617 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0311 03:36:36.142603    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0311 03:36:36.142612    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:36.147488    1617 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0311 03:36:36.153477    1617 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 03:36:36.153487    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0311 03:36:36.153495    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:36.177881    1617 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-597000" context rescaled to 1 replicas
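
The "coredns" rescale above trims the deployment from kubeadm's default of two replicas to one to save memory on the single-node VM. minikube does this through the client-go scale API; the effect matches this kubectl sketch:

    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
        -n kube-system scale deployment coredns --replicas=1
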
	I0311 03:36:36.205512    1617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 03:36:36.208558    1617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 03:36:36.208566    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 03:36:36.208575    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:36.220386    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0311 03:36:36.225037    1617 pod_ready.go:92] pod "coredns-5dd5756b68-848xf" in "kube-system" namespace has status "Ready":"True"
	I0311 03:36:36.225047    1617 pod_ready.go:81] duration metric: took 539.16075ms for pod "coredns-5dd5756b68-848xf" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.225053    1617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s6d4v" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.238159    1617 pod_ready.go:92] pod "coredns-5dd5756b68-s6d4v" in "kube-system" namespace has status "Ready":"True"
	I0311 03:36:36.238170    1617 pod_ready.go:81] duration metric: took 13.113959ms for pod "coredns-5dd5756b68-s6d4v" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.238175    1617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.254917    1617 pod_ready.go:92] pod "etcd-addons-597000" in "kube-system" namespace has status "Ready":"True"
	I0311 03:36:36.254928    1617 pod_ready.go:81] duration metric: took 16.751208ms for pod "etcd-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.254934    1617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.270582    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 03:36:36.274262    1617 pod_ready.go:92] pod "kube-apiserver-addons-597000" in "kube-system" namespace has status "Ready":"True"
	I0311 03:36:36.274272    1617 pod_ready.go:81] duration metric: took 19.335667ms for pod "kube-apiserver-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.274278    1617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.314958    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 03:36:36.323258    1617 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0311 03:36:36.329604    1617 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 03:36:36.329616    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0311 03:36:36.329627    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	W0311 03:36:36.329909    1617 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 03:36:36.329926    1617 retry.go:31] will retry after 139.799701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
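
The failure above is the usual CRD ordering race: all six manifests go through a single kubectl apply, so the VolumeSnapshotClass object is validated before the just-created volumesnapshotclasses CRD has been registered, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries (the re-issued apply at 03:36:36.469913 below adds --force). A sketch that avoids the race by waiting for the CRDs to be established before applying anything that depends on them:

    # Apply the CRDs alone, wait until the API server reports them
    # Established, then apply the dependent resources.
    kubectl apply \
        -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
        -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
        -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io \
        crd/volumesnapshotcontents.snapshot.storage.k8s.io \
        crd/volumesnapshots.snapshot.storage.k8s.io
    kubectl apply \
        -f csi-hostpath-snapshotclass.yaml \
        -f rbac-volume-snapshot-controller.yaml \
        -f volume-snapshot-controller-deployment.yaml
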
	I0311 03:36:36.354235    1617 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0311 03:36:36.354247    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0311 03:36:36.378992    1617 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0311 03:36:36.384086    1617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 03:36:36.384139    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 03:36:36.384153    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:36.390526    1617 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0311 03:36:36.390538    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0311 03:36:36.430110    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 03:36:36.445938    1617 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0311 03:36:36.445951    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0311 03:36:36.469032    1617 pod_ready.go:92] pod "kube-controller-manager-addons-597000" in "kube-system" namespace has status "Ready":"True"
	I0311 03:36:36.469046    1617 pod_ready.go:81] duration metric: took 194.773708ms for pod "kube-controller-manager-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.469051    1617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w945h" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.469913    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 03:36:36.504844    1617 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0311 03:36:36.504857    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0311 03:36:36.550273    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 03:36:36.568776    1617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 03:36:36.568786    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0311 03:36:36.568818    1617 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0311 03:36:36.568822    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0311 03:36:36.643821    1617 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0311 03:36:36.643835    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0311 03:36:36.644130    1617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 03:36:36.644135    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 03:36:36.752645    1617 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 03:36:36.752657    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0311 03:36:36.762584    1617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 03:36:36.762597    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 03:36:36.791553    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 03:36:36.809192    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 03:36:36.883440    1617 pod_ready.go:92] pod "kube-proxy-w945h" in "kube-system" namespace has status "Ready":"True"
	I0311 03:36:36.883454    1617 pod_ready.go:81] duration metric: took 414.418917ms for pod "kube-proxy-w945h" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.883459    1617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:36.922447    1617 out.go:177]   - Using image docker.io/registry:2.8.3
	I0311 03:36:36.925554    1617 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0311 03:36:36.929541    1617 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0311 03:36:36.929551    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0311 03:36:36.929561    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:37.269515    1617 pod_ready.go:92] pod "kube-scheduler-addons-597000" in "kube-system" namespace has status "Ready":"True"
	I0311 03:36:37.269529    1617 pod_ready.go:81] duration metric: took 386.085042ms for pod "kube-scheduler-addons-597000" in "kube-system" namespace to be "Ready" ...
	I0311 03:36:37.269533    1617 pod_ready.go:38] duration metric: took 1.593743791s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
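
The pod_ready checks above give each system-critical component up to 6m0s to report Ready. Outside the harness, the same gate can be approximated with kubectl wait over the label selectors from the summary line:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy \
               component=kube-scheduler; do
        kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done
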
	I0311 03:36:37.269543    1617 api_server.go:52] waiting for apiserver process to appear ...
	I0311 03:36:37.269612    1617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
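
The pgrep flags carry the logic here: -f matches the pattern against the full command line, -x requires that match to cover the command line exactly, and -n returns only the newest matching PID, so the command exits non-zero until a minikube-launched kube-apiserver exists:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process present"
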
	I0311 03:36:37.299830    1617 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0311 03:36:37.299843    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0311 03:36:37.388177    1617 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0311 03:36:37.388187    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0311 03:36:37.448280    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0311 03:36:39.105238    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.209192625s)
	I0311 03:36:39.105271    1617 addons.go:470] Verifying addon ingress=true in "addons-597000"
	I0311 03:36:39.184414    1617 out.go:177] * Verifying ingress addon...
	I0311 03:36:39.192727    1617 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0311 03:36:39.196404    1617 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0311 03:36:39.196413    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
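
kapi.go:96 lines like the one above recur for the remainder of this log: each addon verifier lists pods by label selector and re-polls until none is left Pending. One iteration of that loop amounts to roughly:

    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
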
	I0311 03:36:39.562698    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.539902833s)
	I0311 03:36:39.562718    1617 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-597000"
	I0311 03:36:39.562725    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.292289667s)
	I0311 03:36:39.562705    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.34246475s)
	I0311 03:36:39.562876    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.248053667s)
	I0311 03:36:39.562890    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.132918333s)
	I0311 03:36:39.562922    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.093145083s)
	I0311 03:36:39.562938    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.012792417s)
	I0311 03:36:39.562975    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.7715395s)
	I0311 03:36:39.562995    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.753921417s)
	I0311 03:36:39.567247    1617 addons.go:470] Verifying addon metrics-server=true in "addons-597000"
	I0311 03:36:39.563002    1617 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.293492209s)
	I0311 03:36:39.567257    1617 api_server.go:72] duration metric: took 4.656778209s to wait for apiserver process to appear ...
	I0311 03:36:39.567261    1617 api_server.go:88] waiting for apiserver healthz status ...
	I0311 03:36:39.567269    1617 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0311 03:36:39.563011    1617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.114823125s)
	I0311 03:36:39.567278    1617 addons.go:470] Verifying addon registry=true in "addons-597000"
	I0311 03:36:39.581491    1617 out.go:177] * Verifying registry addon...
	I0311 03:36:39.567194    1617 out.go:177] * Verifying csi-hostpath-driver addon...
	I0311 03:36:39.570274    1617 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
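
The healthz gate is an ordinary HTTPS GET; /healthz is readable by unauthenticated clients through the default system:public-info-viewer binding, so a plain curl (certificate verification skipped here for brevity) reproduces the probe:

    curl -k https://192.168.105.2:8443/healthz
    # prints: ok
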
	I0311 03:36:39.585820    1617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0311 03:36:39.586636    1617 api_server.go:141] control plane version: v1.28.4
	I0311 03:36:39.591442    1617 api_server.go:131] duration metric: took 24.176125ms to wait for apiserver health ...
	I0311 03:36:39.591451    1617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 03:36:39.591767    1617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0311 03:36:39.600123    1617 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 03:36:39.600132    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:39.601185    1617 system_pods.go:59] 18 kube-system pods found
	I0311 03:36:39.601193    1617 system_pods.go:61] "coredns-5dd5756b68-848xf" [a6722c75-2149-49f3-ad16-52705ee40566] Running
	I0311 03:36:39.601196    1617 system_pods.go:61] "coredns-5dd5756b68-s6d4v" [11c7c905-aec9-4add-a656-c35ecfe51d95] Running
	I0311 03:36:39.601200    1617 system_pods.go:61] "csi-hostpath-attacher-0" [0aa77968-09d6-4ba9-a5a7-843197d00f2c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 03:36:39.601202    1617 system_pods.go:61] "csi-hostpath-resizer-0" [7e8cc1c1-692c-4424-9236-16b1bae0ea87] Pending
	I0311 03:36:39.601205    1617 system_pods.go:61] "csi-hostpathplugin-jzg2d" [b6d435d8-ae70-420a-87d3-e2c6fd3bf5ec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 03:36:39.601208    1617 system_pods.go:61] "etcd-addons-597000" [2199af7d-cd11-4a30-b48d-c6ee79735f29] Running
	I0311 03:36:39.601210    1617 system_pods.go:61] "kube-apiserver-addons-597000" [9241ddef-7677-4979-b48c-ee92087b5af6] Running
	I0311 03:36:39.601212    1617 system_pods.go:61] "kube-controller-manager-addons-597000" [6154c323-d902-4cf4-809e-560e73935c81] Running
	I0311 03:36:39.601216    1617 system_pods.go:61] "kube-ingress-dns-minikube" [cc9d9a7c-e77b-4e8b-bae3-38a181a66f72] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 03:36:39.601218    1617 system_pods.go:61] "kube-proxy-w945h" [0cb6eaea-caf6-4d72-9278-160bd5afc698] Running
	I0311 03:36:39.601220    1617 system_pods.go:61] "kube-scheduler-addons-597000" [e2602c65-6113-42c4-afad-c7896b473445] Running
	I0311 03:36:39.601223    1617 system_pods.go:61] "metrics-server-69cf46c98-jx5gp" [20084289-c494-4cb3-9522-626c08a8c482] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 03:36:39.601226    1617 system_pods.go:61] "nvidia-device-plugin-daemonset-74ckq" [71bbf97d-2702-49fb-9d63-7235962d84c3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0311 03:36:39.601238    1617 system_pods.go:61] "registry-fb4xz" [ad331ad1-6a0f-4327-8b07-0646fb1a581c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0311 03:36:39.601241    1617 system_pods.go:61] "registry-proxy-8mg7f" [1c8aa6c3-cb12-49ed-90b8-55a73f146bf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 03:36:39.601245    1617 system_pods.go:61] "snapshot-controller-58dbcc7b99-77cff" [fab40bc6-7fb2-4558-8f72-330de9013680] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 03:36:39.601248    1617 system_pods.go:61] "snapshot-controller-58dbcc7b99-cz2pb" [7ef69b89-520e-48dc-a326-0d41cb1fbbca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 03:36:39.601251    1617 system_pods.go:61] "storage-provisioner" [14866de2-7a11-403e-871d-44178186abf2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 03:36:39.601253    1617 system_pods.go:74] duration metric: took 9.800125ms to wait for pod list to return data ...
	I0311 03:36:39.601258    1617 default_sa.go:34] waiting for default service account to be created ...
	I0311 03:36:39.603599    1617 default_sa.go:45] found service account: "default"
	I0311 03:36:39.603607    1617 default_sa.go:55] duration metric: took 2.346083ms for default service account to be created ...
	I0311 03:36:39.603610    1617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 03:36:39.604007    1617 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 03:36:39.604012    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:39.609569    1617 system_pods.go:86] 18 kube-system pods found
	I0311 03:36:39.609580    1617 system_pods.go:89] "coredns-5dd5756b68-848xf" [a6722c75-2149-49f3-ad16-52705ee40566] Running
	I0311 03:36:39.609584    1617 system_pods.go:89] "coredns-5dd5756b68-s6d4v" [11c7c905-aec9-4add-a656-c35ecfe51d95] Running
	I0311 03:36:39.609587    1617 system_pods.go:89] "csi-hostpath-attacher-0" [0aa77968-09d6-4ba9-a5a7-843197d00f2c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 03:36:39.609590    1617 system_pods.go:89] "csi-hostpath-resizer-0" [7e8cc1c1-692c-4424-9236-16b1bae0ea87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 03:36:39.609594    1617 system_pods.go:89] "csi-hostpathplugin-jzg2d" [b6d435d8-ae70-420a-87d3-e2c6fd3bf5ec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 03:36:39.609597    1617 system_pods.go:89] "etcd-addons-597000" [2199af7d-cd11-4a30-b48d-c6ee79735f29] Running
	I0311 03:36:39.609598    1617 system_pods.go:89] "kube-apiserver-addons-597000" [9241ddef-7677-4979-b48c-ee92087b5af6] Running
	I0311 03:36:39.609600    1617 system_pods.go:89] "kube-controller-manager-addons-597000" [6154c323-d902-4cf4-809e-560e73935c81] Running
	I0311 03:36:39.609603    1617 system_pods.go:89] "kube-ingress-dns-minikube" [cc9d9a7c-e77b-4e8b-bae3-38a181a66f72] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 03:36:39.609604    1617 system_pods.go:89] "kube-proxy-w945h" [0cb6eaea-caf6-4d72-9278-160bd5afc698] Running
	I0311 03:36:39.609606    1617 system_pods.go:89] "kube-scheduler-addons-597000" [e2602c65-6113-42c4-afad-c7896b473445] Running
	I0311 03:36:39.609608    1617 system_pods.go:89] "metrics-server-69cf46c98-jx5gp" [20084289-c494-4cb3-9522-626c08a8c482] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 03:36:39.609613    1617 system_pods.go:89] "nvidia-device-plugin-daemonset-74ckq" [71bbf97d-2702-49fb-9d63-7235962d84c3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0311 03:36:39.609616    1617 system_pods.go:89] "registry-fb4xz" [ad331ad1-6a0f-4327-8b07-0646fb1a581c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0311 03:36:39.609618    1617 system_pods.go:89] "registry-proxy-8mg7f" [1c8aa6c3-cb12-49ed-90b8-55a73f146bf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 03:36:39.609621    1617 system_pods.go:89] "snapshot-controller-58dbcc7b99-77cff" [fab40bc6-7fb2-4558-8f72-330de9013680] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 03:36:39.609623    1617 system_pods.go:89] "snapshot-controller-58dbcc7b99-cz2pb" [7ef69b89-520e-48dc-a326-0d41cb1fbbca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 03:36:39.609627    1617 system_pods.go:89] "storage-provisioner" [14866de2-7a11-403e-871d-44178186abf2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 03:36:39.609630    1617 system_pods.go:126] duration metric: took 6.017417ms to wait for k8s-apps to be running ...
	I0311 03:36:39.609635    1617 system_svc.go:44] waiting for kubelet service to be running ...
	I0311 03:36:39.609892    1617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 03:36:39.618051    1617 system_svc.go:56] duration metric: took 8.409791ms WaitForService to wait for kubelet
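
The kubelet check relies only on systemctl's exit status (--quiet suppresses all output); as a standalone command:

    sudo systemctl is-active --quiet kubelet && echo "kubelet active"
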
	I0311 03:36:39.618068    1617 kubeadm.go:576] duration metric: took 4.707591084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 03:36:39.618081    1617 node_conditions.go:102] verifying NodePressure condition ...
	I0311 03:36:39.619907    1617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 03:36:39.619916    1617 node_conditions.go:123] node cpu capacity is 2
	I0311 03:36:39.619922    1617 node_conditions.go:105] duration metric: took 1.838917ms to run NodePressure ...
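
The NodePressure verification reads the capacity figures straight from the node object; the same numbers (17734596Ki of ephemeral storage, 2 CPUs) appear under Capacity in:

    kubectl describe node addons-597000 | sed -n '/^Capacity:/,/^Allocatable:/p'
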
	I0311 03:36:39.619930    1617 start.go:240] waiting for startup goroutines ...
	I0311 03:36:39.696994    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:40.097160    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:40.097398    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:40.196741    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:40.594404    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:40.594667    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:40.697133    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:41.096519    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:41.096533    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:41.196686    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:41.595683    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:41.595912    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:41.696873    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:42.097026    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:42.097186    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:42.196602    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:42.596166    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:42.596417    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:42.696694    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:43.095508    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:43.095854    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:43.181321    1617 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0311 03:36:43.181336    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:43.195714    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:43.212627    1617 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0311 03:36:43.218535    1617 addons.go:234] Setting addon gcp-auth=true in "addons-597000"
	I0311 03:36:43.218560    1617 host.go:66] Checking if "addons-597000" exists ...
	I0311 03:36:43.219769    1617 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0311 03:36:43.219776    1617 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/addons-597000/id_rsa Username:docker}
	I0311 03:36:43.247248    1617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 03:36:43.253231    1617 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0311 03:36:43.256176    1617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0311 03:36:43.256181    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0311 03:36:43.261887    1617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0311 03:36:43.261892    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0311 03:36:43.267577    1617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 03:36:43.267582    1617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0311 03:36:43.273149    1617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
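
The gcp-auth addon reuses the same copy-then-apply flow after first shipping the host's application-default credentials (the 162-byte google_application_credentials.json above) and project ID into the VM; the webhook it installs injects those credentials into pods. On the host, the source is conventionally the gcloud ADC file (the path below is the standard location, an assumption, not taken from this log):

    cat "$HOME/.config/gcloud/application_default_credentials.json"
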
	I0311 03:36:43.596576    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:43.596795    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:43.648890    1617 addons.go:470] Verifying addon gcp-auth=true in "addons-597000"
	I0311 03:36:43.653569    1617 out.go:177] * Verifying gcp-auth addon...
	I0311 03:36:43.661029    1617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0311 03:36:43.666412    1617 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0311 03:36:43.666421    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:43.697165    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:44.097887    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:44.098150    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:44.164954    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:44.196778    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:44.595072    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:44.595077    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:44.663117    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:44.696430    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:45.096095    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:45.096347    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:45.163054    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:45.196543    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:45.597678    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:45.597864    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:45.665648    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:45.696526    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:46.096029    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:46.096158    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:46.164698    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:46.196075    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:46.596648    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:46.597008    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:46.664743    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:46.696745    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:47.096120    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:47.096564    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:47.164554    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:47.196608    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:47.597192    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:47.597412    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:47.664799    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:47.696383    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:48.096815    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:48.097071    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:48.164661    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:48.196226    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:48.600400    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:48.621718    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:48.663999    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:48.696354    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:49.095392    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:49.095624    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:49.164185    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:49.196068    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:49.596083    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:49.596298    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:49.664187    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:49.695969    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:50.095422    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:50.095464    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:50.164321    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:50.195892    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:50.595412    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:50.596477    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:50.664205    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:50.696721    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:51.093555    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:51.093840    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:51.164172    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:51.196010    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:51.595165    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:51.595704    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:51.664203    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:51.695951    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:52.095599    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:52.095813    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:52.164321    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:52.196038    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:52.596136    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:52.596492    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:52.664242    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:52.696520    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:53.095589    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:53.095833    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:53.164284    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:53.196297    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:53.594539    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:53.594756    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:53.663984    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:53.695932    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:54.095349    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:54.095596    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:54.165388    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:54.196846    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:54.595267    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:54.595822    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:54.664331    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:54.695737    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:55.096008    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:55.096323    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:55.163835    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:55.195857    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:55.595514    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:55.595743    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:55.664240    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:55.695775    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:56.095427    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:56.095726    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:56.164928    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:56.194853    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:56.596436    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:56.596580    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:56.664130    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:56.695804    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:57.096025    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:57.096228    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:57.164143    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:57.195926    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:36:57.595445    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:36:57.595670    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:36:57.662181    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:36:57.695836    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical "waiting for pod" polling lines for the registry, csi-hostpath-driver, gcp-auth, and ingress-nginx selectors, repeated roughly every 0.5s from 03:36:58 through 03:37:36, all reporting "Pending: [<nil>]", elided ...]
	I0311 03:37:36.101743    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 03:37:36.101971    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:37:36.162480    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:37:36.194121    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:37:36.592851    1617 kapi.go:107] duration metric: took 57.009741834s to wait for kubernetes.io/minikube-addons=registry ...
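	[Editor's note] The kapi.go:96 lines above are minikube's addon readiness loop: it lists pods matching a label selector (for example kubernetes.io/minikube-addons=registry) roughly every 0.5s, logs their phase while they remain Pending, and records a duration metric like the line above once they are ready. A minimal sketch of that polling pattern with client-go follows; the helper name waitForPodsBySelector, the kube-system namespace, the 500ms interval, and the 6-minute timeout are illustrative assumptions, not minikube's exact implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsBySelector polls pods matching selector in namespace until every
	// matching pod is Running, or the timeout elapses. Hypothetical helper shown
	// for illustration; it is not minikube's actual kapi implementation.
	func waitForPodsBySelector(ctx context.Context, cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
		start := time.Now()
		deadline := start.Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			// On a transient list error (or no pods yet), fall through and retry.
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						// The state the log lines above report as "Pending".
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						ready = false
					}
				}
				if ready {
					fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForPodsBySelector(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
	}

	Polling with a plain List keeps the sketch short; a production waiter would more likely use a watch or a helper such as k8s.io/apimachinery/pkg/util/wait to avoid busy polling.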
	I0311 03:37:36.593557    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:37:36.661063    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:37:36.693869    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical polling lines for the remaining csi-hostpath-driver, gcp-auth, and ingress-nginx selectors, repeated roughly every 0.5s from 03:37:37 through 03:38:02, all reporting "Pending: [<nil>]", elided ...]
	I0311 03:38:03.092644    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:03.159308    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:03.192240    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:38:03.592578    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:03.661332    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:03.693683    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:38:04.126274    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:04.165287    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:04.196765    1617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 03:38:04.592797    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:04.660899    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:04.692535    1617 kapi.go:107] duration metric: took 1m25.50387475s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0311 03:38:05.092982    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:05.160688    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:05.592450    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:05.661042    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:06.093511    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:06.160960    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:06.592895    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:06.660926    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:07.092479    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:07.160838    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:07.592643    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:07.685133    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:08.093237    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:08.160791    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:08.592340    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:08.660885    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:09.091868    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:09.159785    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:09.592476    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:09.660802    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:10.092608    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:10.160476    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:10.592533    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:10.660566    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:11.093504    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:11.160534    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:11.592343    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:11.659339    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:12.092466    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:12.160515    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:12.592197    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:12.660093    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:13.091987    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:13.160253    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:13.590897    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:13.660313    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:14.092137    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:14.160404    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:14.591496    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:14.659495    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:15.093328    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:15.161224    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:15.591479    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:15.660799    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:16.092875    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:16.160820    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:16.592227    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:16.660172    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:17.092640    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:17.160161    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:17.591995    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:17.660067    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:18.092474    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:18.160175    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:18.592343    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:18.660728    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:19.092265    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:19.160206    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:19.592028    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:19.660030    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:20.091679    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:20.160363    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:20.591721    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:20.660224    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:21.091687    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:21.158549    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:21.592117    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:21.659923    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:22.092035    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:22.159879    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:22.592020    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:22.660239    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:23.091992    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:23.159885    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:23.591898    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:23.659369    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:24.091568    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:24.159756    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:24.590700    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:24.659787    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:25.091714    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:25.159880    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:25.623900    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 03:38:25.658524    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:26.090432    1617 kapi.go:107] duration metric: took 1m46.503724708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0311 03:38:26.159926    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:26.659479    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:27.160176    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:27.660120    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:28.159733    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:28.660107    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:29.159692    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:29.659909    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:30.159808    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:30.660042    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:31.159704    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:31.660150    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:32.159881    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:32.659875    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:33.159689    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:33.659690    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:34.159465    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:34.657772    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:35.159580    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:35.659343    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:36.158480    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:36.659714    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:37.159600    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:37.659671    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:38.158991    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:38.659705    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:39.158487    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:39.659720    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:40.159194    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:40.659457    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:41.159474    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:41.658974    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:42.159118    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:42.658293    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:43.159150    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:43.659378    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:44.159077    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:44.658440    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:45.159322    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:45.658859    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:46.159417    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:46.659103    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:47.159121    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:47.658942    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:48.159192    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:48.659210    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:49.159078    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:49.658777    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:50.158902    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:50.659032    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:51.157557    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:51.658986    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:52.158870    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:52.658869    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:53.158872    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:53.659054    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:54.159151    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:54.658516    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:55.158665    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:55.659058    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:56.158766    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:56.658648    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:57.158816    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:57.658502    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:58.158780    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:58.658463    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:59.157282    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:38:59.658651    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:00.158606    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:00.658133    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:01.157255    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:01.658550    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:02.158075    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:02.658641    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:03.158263    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:03.656306    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:04.158306    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:04.658153    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:05.158167    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:05.658063    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:06.158178    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:06.658181    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:07.158219    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:07.658227    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:08.158136    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:08.657986    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:09.158283    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:09.658166    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:10.158151    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:10.657780    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:11.158835    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:11.657955    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:12.157828    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:12.657637    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:13.157589    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:13.657670    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:14.157711    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:14.657707    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:15.157575    1617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 03:39:15.657688    1617 kapi.go:107] duration metric: took 2m32.003888458s to wait for kubernetes.io/minikube-addons=gcp-auth ...
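	The long run of kapi.go:96/kapi.go:107 lines above records minikube's addon wait loop: list the pods matching a label selector every ~500ms, log the phase while any pod is still Pending, and emit a duration metric once all are Running. The following is a minimal client-go sketch of that pattern, illustrative only: the package and function names are invented, and this is not minikube's actual kapi.go implementation.

	// Package kapisketch is an illustrative reconstruction of the polling
	// pattern recorded in the log above; it is not minikube's real code.
	package kapisketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls every 500ms until every pod matching selector in ns
	// reports phase Running, or until the timeout elapses.
	func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						// Mirrors: waiting for pod "<selector>", current state: Pending
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						allRunning = false
						break
					}
				}
				if allRunning {
					// Mirrors: duration metric: took <d> to wait for <selector> ...
					fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the timestamps
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}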
	I0311 03:39:15.662900    1617 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-597000 cluster.
	I0311 03:39:15.665883    1617 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0311 03:39:15.667588    1617 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0311 03:39:15.672888    1617 out.go:177] * Enabled addons: default-storageclass, yakd, cloud-spanner, ingress-dns, storage-provisioner, volumesnapshots, nvidia-device-plugin, inspektor-gadget, metrics-server, storage-provisioner-rancher, registry, ingress, csi-hostpath-driver, gcp-auth
	I0311 03:39:15.676900    1617 addons.go:505] duration metric: took 2m40.773810625s for enable addons: enabled=[default-storageclass yakd cloud-spanner ingress-dns storage-provisioner volumesnapshots nvidia-device-plugin inspektor-gadget metrics-server storage-provisioner-rancher registry ingress csi-hostpath-driver gcp-auth]
	I0311 03:39:15.676912    1617 start.go:245] waiting for cluster config update ...
	I0311 03:39:15.676921    1617 start.go:254] writing updated cluster config ...
	I0311 03:39:15.677697    1617 ssh_runner.go:195] Run: rm -f paused
	I0311 03:39:15.828657    1617 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0311 03:39:15.832881    1617 out.go:177] * Done! kubectl is now configured to use "addons-597000" cluster and "default" namespace by default
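	The gcp-auth messages above describe the addon's opt-out mechanism: once enabled, the webhook mounts GCP credentials into every new pod unless the pod carries the gcp-auth-skip-secret label. Below is a hedged client-go sketch of creating such an opted-out pod; the pod name and image are illustrative, and the label value "true" is an assumption, since the output above only says the label key must be present.

	package gcpauthsketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// createPodWithoutGCPCreds creates a pod labeled gcp-auth-skip-secret so
	// the gcp-auth webhook skips it. The value "true" is an assumption; the
	// minikube output only requires the label key to be present.
	func createPodWithoutGCPCreds(cs kubernetes.Interface) error {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds", // illustrative name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		_, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		return err
	}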
	
	
	==> Docker <==
	Mar 11 10:40:19 addons-597000 dockerd[1121]: time="2024-03-11T10:40:19.706718328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 10:40:19 addons-597000 dockerd[1121]: time="2024-03-11T10:40:19.706983393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 10:40:19 addons-597000 dockerd[1121]: time="2024-03-11T10:40:19.706994183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:40:19 addons-597000 dockerd[1121]: time="2024-03-11T10:40:19.707059626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:40:19 addons-597000 dockerd[1115]: time="2024-03-11T10:40:19.731541391Z" level=info msg="ignoring event" container=678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 10:40:19 addons-597000 dockerd[1121]: time="2024-03-11T10:40:19.731663489Z" level=info msg="shim disconnected" id=678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a namespace=moby
	Mar 11 10:40:19 addons-597000 dockerd[1121]: time="2024-03-11T10:40:19.731691483Z" level=warning msg="cleaning up after shim disconnected" id=678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a namespace=moby
	Mar 11 10:40:19 addons-597000 dockerd[1121]: time="2024-03-11T10:40:19.731695690Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 10:40:21 addons-597000 dockerd[1115]: time="2024-03-11T10:40:21.124682539Z" level=info msg="ignoring event" container=31ae029a8f0603a86927d84d5e785f91761ca95bd5ecb7d1a60e79def57749b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 10:40:21 addons-597000 dockerd[1121]: time="2024-03-11T10:40:21.124885242Z" level=info msg="shim disconnected" id=31ae029a8f0603a86927d84d5e785f91761ca95bd5ecb7d1a60e79def57749b9 namespace=moby
	Mar 11 10:40:21 addons-597000 dockerd[1121]: time="2024-03-11T10:40:21.124917859Z" level=warning msg="cleaning up after shim disconnected" id=31ae029a8f0603a86927d84d5e785f91761ca95bd5ecb7d1a60e79def57749b9 namespace=moby
	Mar 11 10:40:21 addons-597000 dockerd[1121]: time="2024-03-11T10:40:21.124922400Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.281696244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.281730403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.281738610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.281767770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:40:22 addons-597000 cri-dockerd[1012]: time="2024-03-11T10:40:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/578aac43b0c5229e20284b668d4a30baf7651daceb52177b21d9240f8ddc043b/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.381812504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.381887154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.381910357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.381965594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:40:22 addons-597000 dockerd[1115]: time="2024-03-11T10:40:22.406340422Z" level=info msg="ignoring event" container=9796d7c19bc42e8de70924a35eb680d8bcfce76659ad1eb325f2f660d5517133 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.406722502Z" level=info msg="shim disconnected" id=9796d7c19bc42e8de70924a35eb680d8bcfce76659ad1eb325f2f660d5517133 namespace=moby
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.406753537Z" level=warning msg="cleaning up after shim disconnected" id=9796d7c19bc42e8de70924a35eb680d8bcfce76659ad1eb325f2f660d5517133 namespace=moby
	Mar 11 10:40:22 addons-597000 dockerd[1121]: time="2024-03-11T10:40:22.406757952Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	9796d7c19bc42       fc9db2894f4e4                                                                                                    1 second ago         Exited              helper-pod                 0                   578aac43b0c52       helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24
	a7f72fc2044e2       dd1b12fcb6097                                                                                                    12 seconds ago       Exited              hello-world-app            1                   16d435e4fcc62       hello-world-app-5d77478584-vmwv2
	26b79fbb745fb       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                    31 seconds ago       Running             nginx                      0                   77c0e41eff9ce       nginx
	cfeb1f9613ef2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32     About a minute ago   Running             gcp-auth                   0                   07baf149cc049       gcp-auth-5f6b4f85fd-zxl9p
	8bf45702968a7       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246           3 minutes ago        Running             local-path-provisioner     0                   009ae11c995e0       local-path-provisioner-78b46b4d5c-77g4g
	d148656d65cc6       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15   3 minutes ago        Running             cloud-spanner-emulator     0                   8dad50e7206bb       cloud-spanner-emulator-6548d5df46-jr2ft
	f02b2fd22fd97       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2         3 minutes ago        Running             nvidia-device-plugin-ctr   0                   f54dd4601dbde       nvidia-device-plugin-daemonset-74ckq
	0a6efa5c99707       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                            3 minutes ago        Running             yakd                       0                   ff88190e617b8       yakd-dashboard-9947fc6bf-6949z
	45836e6e3474f       ba04bb24b9575                                                                                                    3 minutes ago        Running             storage-provisioner        0                   f95fd66880421       storage-provisioner
	b837202db2e65       97e04611ad434                                                                                                    3 minutes ago        Running             coredns                    0                   439d1e4f21350       coredns-5dd5756b68-s6d4v
	f02099d89e104       3ca3ca488cf13                                                                                                    3 minutes ago        Running             kube-proxy                 0                   7fecaf967d391       kube-proxy-w945h
	4aee32c2aa0f9       05c284c929889                                                                                                    4 minutes ago        Running             kube-scheduler             0                   d39a64fb3467b       kube-scheduler-addons-597000
	12426a70da824       9cdd6470f48c8                                                                                                    4 minutes ago        Running             etcd                       0                   364c7f0047e04       etcd-addons-597000
	5aa275c44e38a       9961cbceaf234                                                                                                    4 minutes ago        Running             kube-controller-manager    0                   86f89f20b0e6a       kube-controller-manager-addons-597000
	7907af2537b41       04b4c447bb9d4                                                                                                    4 minutes ago        Running             kube-apiserver             0                   26d3059224077       kube-apiserver-addons-597000
	
	
	==> coredns [b837202db2e6] <==
	[INFO] 10.244.0.20:39853 - 65324 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014954s
	[INFO] 10.244.0.20:55958 - 23340 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002487s
	[INFO] 10.244.0.20:55958 - 59762 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036699s
	[INFO] 10.244.0.20:39853 - 53585 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010956s
	[INFO] 10.244.0.20:55958 - 56044 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024953s
	[INFO] 10.244.0.20:39853 - 16739 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015746s
	[INFO] 10.244.0.20:55958 - 5811 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004928s
	[INFO] 10.244.0.20:39853 - 59730 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014164s
	[INFO] 10.244.0.20:39853 - 17821 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011414s
	[INFO] 10.244.0.20:55958 - 27409 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033784s
	[INFO] 10.244.0.20:39853 - 40354 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009914s
	[INFO] 10.244.0.20:60554 - 37199 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039823s
	[INFO] 10.244.0.20:42087 - 23017 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013997s
	[INFO] 10.244.0.20:42087 - 55785 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000011414s
	[INFO] 10.244.0.20:60554 - 45101 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012289s
	[INFO] 10.244.0.20:42087 - 57951 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011956s
	[INFO] 10.244.0.20:60554 - 7362 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009706s
	[INFO] 10.244.0.20:60554 - 48382 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011372s
	[INFO] 10.244.0.20:42087 - 29346 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046989s
	[INFO] 10.244.0.20:60554 - 25343 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010498s
	[INFO] 10.244.0.20:60554 - 10411 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013539s
	[INFO] 10.244.0.20:42087 - 44274 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013663s
	[INFO] 10.244.0.20:60554 - 5550 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000024369s
	[INFO] 10.244.0.20:42087 - 42756 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012456s
	[INFO] 10.244.0.20:42087 - 43556 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011789s
	
	
	==> describe nodes <==
	Name:               addons-597000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-597000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=addons-597000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T03_36_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-597000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 10:36:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-597000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 10:40:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 10:39:56 +0000   Mon, 11 Mar 2024 10:36:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 10:39:56 +0000   Mon, 11 Mar 2024 10:36:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 10:39:56 +0000   Mon, 11 Mar 2024 10:36:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 10:39:56 +0000   Mon, 11 Mar 2024 10:36:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-597000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 55d721db211f42a5bbb26aaba12ac5d1
	  System UUID:                55d721db211f42a5bbb26aaba12ac5d1
	  Boot ID:                    792f89f1-fc5d-43e3-a666-5fff8f07b567
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-jr2ft                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  default                     hello-world-app-5d77478584-vmwv2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-5f6b4f85fd-zxl9p                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-5dd5756b68-s6d4v                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m48s
	  kube-system                 etcd-addons-597000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m1s
	  kube-system                 kube-apiserver-addons-597000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-addons-597000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-w945h                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-scheduler-addons-597000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 nvidia-device-plugin-daemonset-74ckq                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  local-path-storage          helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-78b46b4d5c-77g4g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-6949z                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m48s  kube-proxy       
	  Normal  Starting                 4m2s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m1s   kubelet          Node addons-597000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s   kubelet          Node addons-597000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s   kubelet          Node addons-597000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m59s  kubelet          Node addons-597000 status is now: NodeReady
	  Normal  RegisteredNode           3m49s  node-controller  Node addons-597000 event: Registered Node addons-597000 in Controller
	
	
	==> dmesg <==
	[  +0.043315] kauditd_printk_skb: 64 callbacks suppressed
	[ +13.344389] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.174185] systemd-fstab-generator[3147]: Ignoring "noauto" option for root device
	[  +5.022049] kauditd_printk_skb: 229 callbacks suppressed
	[  +5.596158] kauditd_printk_skb: 61 callbacks suppressed
	[ +10.755607] kauditd_printk_skb: 5 callbacks suppressed
	[Mar11 10:37] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.548300] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.527808] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.535681] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.491857] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.070023] kauditd_printk_skb: 23 callbacks suppressed
	[Mar11 10:38] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.585499] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.354279] kauditd_printk_skb: 14 callbacks suppressed
	[ +15.759852] kauditd_printk_skb: 4 callbacks suppressed
	[Mar11 10:39] kauditd_printk_skb: 8 callbacks suppressed
	[ +29.940864] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.304777] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.242513] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.873697] kauditd_printk_skb: 11 callbacks suppressed
	[Mar11 10:40] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.914137] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.096252] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.085473] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [12426a70da82] <==
	{"level":"info","ts":"2024-03-11T10:36:18.689884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-03-11T10:36:18.690536Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T10:36:18.690843Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-597000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T10:36:18.690877Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T10:36:18.690956Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T10:36:18.690995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T10:36:18.691017Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T10:36:18.691493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T10:36:18.691564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T10:36:18.691931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-03-11T10:36:18.70676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T10:36:18.706794Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T10:36:41.267236Z","caller":"traceutil/trace.go:171","msg":"trace[1664954092] transaction","detail":"{read_only:false; response_revision:762; number_of_response:1; }","duration":"184.444878ms","start":"2024-03-11T10:36:41.082782Z","end":"2024-03-11T10:36:41.267227Z","steps":["trace[1664954092] 'process raft request'  (duration: 183.95432ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T10:37:12.462901Z","caller":"traceutil/trace.go:171","msg":"trace[1088574845] linearizableReadLoop","detail":"{readStateIndex:930; appliedIndex:929; }","duration":"174.24752ms","start":"2024-03-11T10:37:12.288644Z","end":"2024-03-11T10:37:12.462891Z","steps":["trace[1088574845] 'read index received'  (duration: 174.160454ms)","trace[1088574845] 'applied index is now lower than readState.Index'  (duration: 86.733µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T10:37:12.46295Z","caller":"traceutil/trace.go:171","msg":"trace[1665260113] transaction","detail":"{read_only:false; response_revision:905; number_of_response:1; }","duration":"180.673613ms","start":"2024-03-11T10:37:12.282273Z","end":"2024-03-11T10:37:12.462947Z","steps":["trace[1665260113] 'process raft request'  (duration: 180.543409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T10:37:12.463196Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.555644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78171"}
	{"level":"info","ts":"2024-03-11T10:37:12.463215Z","caller":"traceutil/trace.go:171","msg":"trace[1053454931] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:905; }","duration":"174.583709ms","start":"2024-03-11T10:37:12.288628Z","end":"2024-03-11T10:37:12.463212Z","steps":["trace[1053454931] 'agreement among raft nodes before linearized reading'  (duration: 174.497642ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T10:37:12.463375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.718867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78171"}
	{"level":"warn","ts":"2024-03-11T10:37:12.463511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.199853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10525"}
	{"level":"info","ts":"2024-03-11T10:37:12.46352Z","caller":"traceutil/trace.go:171","msg":"trace[1853970580] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:905; }","duration":"105.210179ms","start":"2024-03-11T10:37:12.358308Z","end":"2024-03-11T10:37:12.463518Z","steps":["trace[1853970580] 'agreement among raft nodes before linearized reading'  (duration: 105.188485ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T10:37:12.463383Z","caller":"traceutil/trace.go:171","msg":"trace[1117500875] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:905; }","duration":"174.72682ms","start":"2024-03-11T10:37:12.288654Z","end":"2024-03-11T10:37:12.463381Z","steps":["trace[1117500875] 'agreement among raft nodes before linearized reading'  (duration: 174.691344ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T10:38:04.084924Z","caller":"traceutil/trace.go:171","msg":"trace[1962080385] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"159.721265ms","start":"2024-03-11T10:38:03.925194Z","end":"2024-03-11T10:38:04.084915Z","steps":["trace[1962080385] 'process raft request'  (duration: 159.674779ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T10:39:39.245937Z","caller":"traceutil/trace.go:171","msg":"trace[75014563] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1379; }","duration":"111.250773ms","start":"2024-03-11T10:39:39.134675Z","end":"2024-03-11T10:39:39.245926Z","steps":["trace[75014563] 'process raft request'  (duration: 111.117222ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T10:39:40.631088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.364103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.2\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-03-11T10:39:40.631118Z","caller":"traceutil/trace.go:171","msg":"trace[2017527517] range","detail":"{range_begin:/registry/masterleases/192.168.105.2; range_end:; response_count:1; response_revision:1401; }","duration":"157.421339ms","start":"2024-03-11T10:39:40.47369Z","end":"2024-03-11T10:39:40.631112Z","steps":["trace[2017527517] 'range keys from in-memory index tree'  (duration: 157.320363ms)"],"step_count":1}
	
	
	==> gcp-auth [cfeb1f9613ef] <==
	2024/03/11 10:39:14 GCP Auth Webhook started!
	2024/03/11 10:39:26 Ready to marshal response ...
	2024/03/11 10:39:26 Ready to write response ...
	2024/03/11 10:39:35 Ready to marshal response ...
	2024/03/11 10:39:35 Ready to write response ...
	2024/03/11 10:39:49 Ready to marshal response ...
	2024/03/11 10:39:49 Ready to write response ...
	2024/03/11 10:39:57 Ready to marshal response ...
	2024/03/11 10:39:57 Ready to write response ...
	2024/03/11 10:40:00 Ready to marshal response ...
	2024/03/11 10:40:00 Ready to write response ...
	2024/03/11 10:40:12 Ready to marshal response ...
	2024/03/11 10:40:12 Ready to write response ...
	2024/03/11 10:40:12 Ready to marshal response ...
	2024/03/11 10:40:12 Ready to write response ...
	2024/03/11 10:40:21 Ready to marshal response ...
	2024/03/11 10:40:21 Ready to write response ...
	
	
	==> kernel <==
	 10:40:23 up 4 min,  0 users,  load average: 0.61, 0.55, 0.27
	Linux addons-597000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7907af2537b4] <==
	I0311 10:39:49.890679       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.205.72"}
	I0311 10:40:00.113514       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.184.195"}
	I0311 10:40:12.418038       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.418055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 10:40:12.427297       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.427323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 10:40:12.429223       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.429238       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 10:40:12.443716       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.443735       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 10:40:12.473147       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.473189       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 10:40:12.483395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.483459       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 10:40:12.493076       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.493305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 10:40:12.494306       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 10:40:12.494367       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0311 10:40:13.484031       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0311 10:40:13.494000       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0311 10:40:13.502708       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0311 10:40:19.976250       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0311 10:40:22.949370       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0311 10:40:22.950486       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0311 10:40:22.951685       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [5aa275c44e38] <==
	I0311 10:40:14.002352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.158µs"
	W0311 10:40:14.297151       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:14.297171       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 10:40:14.610156       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:14.610179       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 10:40:14.822257       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:14.822275       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 10:40:16.234836       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0311 10:40:16.235130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="2.208µs"
	I0311 10:40:16.237343       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0311 10:40:16.268416       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:16.268438       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 10:40:16.713383       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:16.713405       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 10:40:16.930520       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:16.930540       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 10:40:19.340231       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:19.340250       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 10:40:20.560232       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:20.560252       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 10:40:20.701969       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:20.702000       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 10:40:22.212446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="5.707µs"
	W0311 10:40:22.340735       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 10:40:22.340751       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [f02099d89e10] <==
	I0311 10:36:35.322072       1 server_others.go:69] "Using iptables proxy"
	I0311 10:36:35.329777       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0311 10:36:35.366486       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 10:36:35.366498       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 10:36:35.368380       1 server_others.go:152] "Using iptables Proxier"
	I0311 10:36:35.368438       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 10:36:35.368553       1 server.go:846] "Version info" version="v1.28.4"
	I0311 10:36:35.368675       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 10:36:35.369188       1 config.go:188] "Starting service config controller"
	I0311 10:36:35.369232       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 10:36:35.369262       1 config.go:97] "Starting endpoint slice config controller"
	I0311 10:36:35.369292       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 10:36:35.369584       1 config.go:315] "Starting node config controller"
	I0311 10:36:35.369610       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 10:36:35.469658       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 10:36:35.469716       1 shared_informer.go:318] Caches are synced for service config
	I0311 10:36:35.469875       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4aee32c2aa0f] <==
	W0311 10:36:19.328498       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 10:36:19.328711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 10:36:19.328722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 10:36:19.328726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 10:36:19.328552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 10:36:19.328731       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 10:36:19.328771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 10:36:19.328788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 10:36:19.328918       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 10:36:19.328927       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 10:36:19.329040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 10:36:19.329048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 10:36:20.145295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 10:36:20.145315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 10:36:20.180917       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 10:36:20.180930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 10:36:20.200578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 10:36:20.200588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 10:36:20.200623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 10:36:20.200627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 10:36:20.215813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 10:36:20.215823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 10:36:20.272671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 10:36:20.272713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0311 10:36:20.617998       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 10:40:21 addons-597000 kubelet[2407]: I0311 10:40:21.412575    2407 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/df46f00e-7391-4d35-a31e-fdcc823f00b8-gcp-creds\") on node \"addons-597000\" DevicePath \"\""
	Mar 11 10:40:21 addons-597000 kubelet[2407]: I0311 10:40:21.412582    2407 reconciler_common.go:300] "Volume detached for volume \"pvc-3949e277-7901-4317-bdda-9cee2a039c24\" (UniqueName: \"kubernetes.io/host-path/df46f00e-7391-4d35-a31e-fdcc823f00b8-pvc-3949e277-7901-4317-bdda-9cee2a039c24\") on node \"addons-597000\" DevicePath \"\""
	Mar 11 10:40:21 addons-597000 kubelet[2407]: I0311 10:40:21.945997    2407 topology_manager.go:215] "Topology Admit Handler" podUID="5a282778-dd55-4c3a-a68f-4386b63fcfaa" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24"
	Mar 11 10:40:21 addons-597000 kubelet[2407]: E0311 10:40:21.946034    2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df46f00e-7391-4d35-a31e-fdcc823f00b8" containerName="busybox"
	Mar 11 10:40:21 addons-597000 kubelet[2407]: E0311 10:40:21.946040    2407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc9d9a7c-e77b-4e8b-bae3-38a181a66f72" containerName="minikube-ingress-dns"
	Mar 11 10:40:21 addons-597000 kubelet[2407]: I0311 10:40:21.946057    2407 memory_manager.go:346] "RemoveStaleState removing state" podUID="cc9d9a7c-e77b-4e8b-bae3-38a181a66f72" containerName="minikube-ingress-dns"
	Mar 11 10:40:21 addons-597000 kubelet[2407]: I0311 10:40:21.946060    2407 memory_manager.go:346] "RemoveStaleState removing state" podUID="df46f00e-7391-4d35-a31e-fdcc823f00b8" containerName="busybox"
	Mar 11 10:40:21 addons-597000 kubelet[2407]: I0311 10:40:21.946063    2407 memory_manager.go:346] "RemoveStaleState removing state" podUID="cc9d9a7c-e77b-4e8b-bae3-38a181a66f72" containerName="minikube-ingress-dns"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.016364    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5a282778-dd55-4c3a-a68f-4386b63fcfaa-data\") pod \"helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24\" (UID: \"5a282778-dd55-4c3a-a68f-4386b63fcfaa\") " pod="local-path-storage/helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.016445    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphv9\" (UniqueName: \"kubernetes.io/projected/5a282778-dd55-4c3a-a68f-4386b63fcfaa-kube-api-access-hphv9\") pod \"helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24\" (UID: \"5a282778-dd55-4c3a-a68f-4386b63fcfaa\") " pod="local-path-storage/helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.016471    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a282778-dd55-4c3a-a68f-4386b63fcfaa-gcp-creds\") pod \"helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24\" (UID: \"5a282778-dd55-4c3a-a68f-4386b63fcfaa\") " pod="local-path-storage/helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.016499    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5a282778-dd55-4c3a-a68f-4386b63fcfaa-script\") pod \"helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24\" (UID: \"5a282778-dd55-4c3a-a68f-4386b63fcfaa\") " pod="local-path-storage/helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.017718    2407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="df46f00e-7391-4d35-a31e-fdcc823f00b8" path="/var/lib/kubelet/pods/df46f00e-7391-4d35-a31e-fdcc823f00b8/volumes"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: E0311 10:40:22.018780    2407 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 10:40:22 addons-597000 kubelet[2407]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 10:40:22 addons-597000 kubelet[2407]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 10:40:22 addons-597000 kubelet[2407]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 10:40:22 addons-597000 kubelet[2407]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.065983    2407 scope.go:117] "RemoveContainer" containerID="39e308555ea0c53e6708c25cfbdc2ade131979b9e7b8b4caca327f72812ce346"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.076702    2407 scope.go:117] "RemoveContainer" containerID="d99e3d02a4e7c15cd468d5cd9303b841a765264a5875628dc5fd0d68c8fee61c"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.076921    2407 scope.go:117] "RemoveContainer" containerID="678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.082243    2407 scope.go:117] "RemoveContainer" containerID="678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: E0311 10:40:22.085297    2407 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to remove container \"678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a\": Error response from daemon: removal of container 678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a is already in progress" containerID="678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: E0311 10:40:22.085318    2407 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to remove container \"678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a\": Error response from daemon: removal of container 678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a is already in progress" containerID="678d1c11053a19996548e9fe47d7874724e91f062215d5eff9b2145c3df5ba8a"
	Mar 11 10:40:22 addons-597000 kubelet[2407]: I0311 10:40:22.085324    2407 scope.go:117] "RemoveContainer" containerID="cdaeb9f556b44f635fd01b38fff4b6bf42f10ace7ea4d5d40bce6438653c5149"
	
	
	==> storage-provisioner [45836e6e3474] <==
	I0311 10:36:40.147153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 10:36:40.165590       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 10:36:40.165610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 10:36:40.204724       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 10:36:40.204819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-597000_edaa45aa-63e7-46c8-971b-f12faf4c20de!
	I0311 10:36:40.205081       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d23c7b0-e5a6-410b-8b77-ac733a22e553", APIVersion:"v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-597000_edaa45aa-63e7-46c8-971b-f12faf4c20de became leader
	I0311 10:36:40.305518       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-597000_edaa45aa-63e7-46c8-971b-f12faf4c20de!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-597000 -n addons-597000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-597000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-597000 describe pod helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-597000 describe pod helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24: exit status 1 (40.512375ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-597000 describe pod helper-pod-delete-pvc-3949e277-7901-4317-bdda-9cee2a039c24: exit status 1
--- FAIL: TestAddons/parallel/Ingress (34.53s)
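
The non-running pod flagged above is a short-lived local-path-provisioner helper (visible in the kubelet log earlier in this report); it most likely completed and was cleaned up between the list and the describe, which is why the describe step returns NotFound. The list step itself is roughly equivalent to the following client-go sketch (kubeconfig path and error handling are illustrative, not part of the test suite):

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumed kubeconfig location; the test itself uses --context addons-597000.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same query as helpers_test.go:261: all namespaces, phase != Running.
		pods, err := client.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// Pending and Succeeded pods (e.g. completed helpers) both match.
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}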

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-919000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-919000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.793557417s)

-- stdout --
	* [cert-options-919000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-919000" primary control-plane node in "cert-options-919000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-919000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-919000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
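
Every qemu2 start in this run fails at the same point: the driver cannot connect to the socket_vmnet unix socket. The socket path matches the cluster config dumped later in this report (SocketVMnetPath:/var/run/socket_vmnet). A minimal standalone sketch to check reachability outside of minikube (connecting may require root, since socket_vmnet typically runs as a privileged launchd service):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver uses.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" means nothing is listening on the socket.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refusal here points at the CI host (the socket_vmnet service not running) rather than at anything cluster- or test-specific, which matches the same error appearing across unrelated tests in this report.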
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-919000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-919000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-919000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (88.809917ms)

-- stdout --
	* The control-plane node cert-options-919000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-919000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-919000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
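Had the VM come up, the SAN assertions at cert_options_test.go:69 reduce to parsing the apiserver certificate and checking its DNS and IP SAN fields. A minimal sketch of that check (the local file path is hypothetical; the certificate would first be copied out of the guest, where it lives at /var/lib/minikube/certs/apiserver.crt):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expected: localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // expected: 127.0.0.1, 192.168.15.15
	}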
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-919000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
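The port assertion at cert_options_test.go:93 checks that the requested apiserver port (8555) shows up in the `kubectl config view` output; since no cluster was ever created, the kubeconfig is empty and the check trivially fails. Reading the server field programmatically is a one-liner with clientcmd (a sketch; the path comes from the test environment and is an assumption):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18350-986/kubeconfig")
		if err != nil {
			panic(err)
		}
		for name, cluster := range cfg.Clusters {
			fmt.Println(name, cluster.Server) // a successful run would show ...:8555 here
		}
	}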
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-919000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-919000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.212125ms)

-- stdout --
	* The control-plane node cert-options-919000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-919000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-919000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-919000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-919000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-11 04:26:50.549735 -0700 PDT m=+3148.067912042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-919000 -n cert-options-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-919000 -n cert-options-919000: exit status 7 (32.378916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-919000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-919000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-919000
--- FAIL: TestCertOptions (10.10s)

TestCertExpiration (195.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-049000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-049000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.753883875s)

-- stdout --
	* [cert-expiration-049000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-049000" primary control-plane node in "cert-expiration-049000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-049000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-049000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-049000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-049000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.243691875s)

-- stdout --
	* [cert-expiration-049000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-049000" primary control-plane node in "cert-expiration-049000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-049000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-049000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-049000" primary control-plane node in "cert-expiration-049000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-11 04:29:40.50513 -0700 PDT m=+3318.027063959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-049000 -n cert-expiration-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-049000 -n cert-expiration-049000: exit status 7 (67.831459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-049000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-049000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-049000
--- FAIL: TestCertExpiration (195.18s)
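
For reference, the condition this test exercises is a plain expiry check on certificates minted with --cert-expiration=3m: after the three-minute window, the second start is expected to warn about the expired certs (per the assertion at cert_options_test.go:136). Neither start got past guest provisioning here, so that path was never reached. A minimal sketch of the expiry check itself, assuming a locally copied certificate file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires:", cert.NotAfter)
		if time.Now().After(cert.NotAfter) {
			fmt.Println("certificate has expired")
		}
	}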

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-623000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-623000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.847703833s)

-- stdout --
	* [docker-flags-623000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-623000" primary control-plane node in "docker-flags-623000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-623000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:26:30.511866    4612 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:26:30.511995    4612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:30.511999    4612 out.go:304] Setting ErrFile to fd 2...
	I0311 04:26:30.512001    4612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:30.512139    4612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:26:30.513211    4612 out.go:298] Setting JSON to false
	I0311 04:26:30.529459    4612 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3362,"bootTime":1710153028,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:26:30.529518    4612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:26:30.535446    4612 out.go:177] * [docker-flags-623000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:26:30.546417    4612 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:26:30.542575    4612 notify.go:220] Checking for updates...
	I0311 04:26:30.552519    4612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:26:30.554005    4612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:26:30.557568    4612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:26:30.560568    4612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:26:30.563557    4612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:26:30.566890    4612 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:26:30.566958    4612 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:26:30.567003    4612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:26:30.571514    4612 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:26:30.578554    4612 start.go:297] selected driver: qemu2
	I0311 04:26:30.578560    4612 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:26:30.578567    4612 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:26:30.580809    4612 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:26:30.584528    4612 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:26:30.587635    4612 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0311 04:26:30.587675    4612 cni.go:84] Creating CNI manager for ""
	I0311 04:26:30.587682    4612 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:26:30.587686    4612 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:26:30.587714    4612 start.go:340] cluster config:
	{Name:docker-flags-623000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:26:30.592323    4612 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:26:30.599563    4612 out.go:177] * Starting "docker-flags-623000" primary control-plane node in "docker-flags-623000" cluster
	I0311 04:26:30.602494    4612 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:26:30.602508    4612 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:26:30.602519    4612 cache.go:56] Caching tarball of preloaded images
	I0311 04:26:30.602582    4612 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:26:30.602589    4612 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:26:30.602667    4612 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/docker-flags-623000/config.json ...
	I0311 04:26:30.602685    4612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/docker-flags-623000/config.json: {Name:mk2cea6e749470d9156324c7a73bff4dce71275f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:26:30.602884    4612 start.go:360] acquireMachinesLock for docker-flags-623000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:30.602915    4612 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "docker-flags-623000"
	I0311 04:26:30.602925    4612 start.go:93] Provisioning new machine with config: &{Name:docker-flags-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:30.602953    4612 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:30.610392    4612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:26:30.626992    4612 start.go:159] libmachine.API.Create for "docker-flags-623000" (driver="qemu2")
	I0311 04:26:30.627018    4612 client.go:168] LocalClient.Create starting
	I0311 04:26:30.627076    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:30.627105    4612 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:30.627114    4612 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:30.627155    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:30.627175    4612 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:30.627182    4612 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:30.627510    4612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:30.768148    4612 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:30.881845    4612 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:30.881851    4612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:30.882039    4612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2
	I0311 04:26:30.894507    4612 main.go:141] libmachine: STDOUT: 
	I0311 04:26:30.894539    4612 main.go:141] libmachine: STDERR: 
	I0311 04:26:30.894584    4612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2 +20000M
	I0311 04:26:30.905180    4612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:30.905197    4612 main.go:141] libmachine: STDERR: 
	I0311 04:26:30.905213    4612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2
	I0311 04:26:30.905217    4612 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:30.905246    4612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:fe:a3:2e:6d:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2
	I0311 04:26:30.906909    4612 main.go:141] libmachine: STDOUT: 
	I0311 04:26:30.906927    4612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:30.906943    4612 client.go:171] duration metric: took 279.924375ms to LocalClient.Create
	I0311 04:26:32.909154    4612 start.go:128] duration metric: took 2.306232708s to createHost
	I0311 04:26:32.909209    4612 start.go:83] releasing machines lock for "docker-flags-623000", held for 2.306335167s
	W0311 04:26:32.909261    4612 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:32.922190    4612 out.go:177] * Deleting "docker-flags-623000" in qemu2 ...
	W0311 04:26:32.943264    4612 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:32.943284    4612 start.go:728] Will try again in 5 seconds ...
	I0311 04:26:37.945303    4612 start.go:360] acquireMachinesLock for docker-flags-623000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:37.945770    4612 start.go:364] duration metric: took 331.917µs to acquireMachinesLock for "docker-flags-623000"
	I0311 04:26:37.945920    4612 start.go:93] Provisioning new machine with config: &{Name:docker-flags-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:37.946195    4612 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:37.955779    4612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:26:38.007256    4612 start.go:159] libmachine.API.Create for "docker-flags-623000" (driver="qemu2")
	I0311 04:26:38.007322    4612 client.go:168] LocalClient.Create starting
	I0311 04:26:38.007429    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:38.007513    4612 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:38.007534    4612 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:38.007614    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:38.007647    4612 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:38.007666    4612 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:38.008232    4612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:38.160606    4612 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:38.256572    4612 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:38.256577    4612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:38.256766    4612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2
	I0311 04:26:38.269211    4612 main.go:141] libmachine: STDOUT: 
	I0311 04:26:38.269232    4612 main.go:141] libmachine: STDERR: 
	I0311 04:26:38.269301    4612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2 +20000M
	I0311 04:26:38.279846    4612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:38.279864    4612 main.go:141] libmachine: STDERR: 
	I0311 04:26:38.279876    4612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2
	I0311 04:26:38.279890    4612 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:38.279930    4612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:32:4a:43:c8:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/docker-flags-623000/disk.qcow2
	I0311 04:26:38.281680    4612 main.go:141] libmachine: STDOUT: 
	I0311 04:26:38.281697    4612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:38.281709    4612 client.go:171] duration metric: took 274.385875ms to LocalClient.Create
	I0311 04:26:40.283848    4612 start.go:128] duration metric: took 2.33767375s to createHost
	I0311 04:26:40.283967    4612 start.go:83] releasing machines lock for "docker-flags-623000", held for 2.338225125s
	W0311 04:26:40.284280    4612 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:40.297027    4612 out.go:177] 
	W0311 04:26:40.301095    4612 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:26:40.301122    4612 out.go:239] * 
	* 
	W0311 04:26:40.303727    4612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:26:40.314910    4612 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-623000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-623000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-623000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.904417ms)

-- stdout --
	* The control-plane node docker-flags-623000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-623000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-623000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-623000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-623000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-623000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-623000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-623000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-623000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.615666ms)

-- stdout --
	* The control-plane node docker-flags-623000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-623000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-623000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-623000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-623000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-623000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-11 04:26:40.457869 -0700 PDT m=+3137.975823334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-623000 -n docker-flags-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-623000 -n docker-flags-623000: exit status 7 (32.027084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-623000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-623000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-623000
--- FAIL: TestDockerFlags (10.11s)
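
Every failure in this group reduces to the same condition: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched, createHost gives up after one retry, and every subsequent minikube ssh/status call runs against a Stopped host (exit status 83/7). A minimal preflight sketch, assuming the daemon should be listening at the SocketVMnetPath recorded in the config above (hypothetical helper; not part of the test suite):

	// preflight.go: hypothetical check that the socket_vmnet daemon is
	// accepting connections. A refused dial here is the exact condition
	// the failures above report against "/var/run/socket_vmnet".
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1) // predicts the ERROR lines seen in every qemu2 VM start
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections; qemu2 tests can proceed")
	}

Running something like this on the build agent before the qemu2 groups start would separate an environment problem (daemon down or wrong socket permissions) from genuine driver regressions.
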
TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-846000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-846000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.852789791s)

-- stdout --
	* [force-systemd-flag-846000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-846000" primary control-plane node in "force-systemd-flag-846000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-846000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0311 04:26:05.247431    4475 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:26:05.247572    4475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:05.247575    4475 out.go:304] Setting ErrFile to fd 2...
	I0311 04:26:05.247578    4475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:05.247716    4475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:26:05.248770    4475 out.go:298] Setting JSON to false
	I0311 04:26:05.264924    4475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3337,"bootTime":1710153028,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:26:05.264986    4475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:26:05.269668    4475 out.go:177] * [force-systemd-flag-846000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:26:05.275499    4475 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:26:05.278545    4475 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:26:05.275575    4475 notify.go:220] Checking for updates...
	I0311 04:26:05.281628    4475 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:26:05.284584    4475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:26:05.287592    4475 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:26:05.290587    4475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:26:05.292378    4475 config.go:182] Loaded profile config "NoKubernetes-886000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0311 04:26:05.292449    4475 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:26:05.292494    4475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:26:05.296556    4475 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:26:05.303407    4475 start.go:297] selected driver: qemu2
	I0311 04:26:05.303414    4475 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:26:05.303427    4475 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:26:05.305537    4475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:26:05.308521    4475 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:26:05.311691    4475 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 04:26:05.311746    4475 cni.go:84] Creating CNI manager for ""
	I0311 04:26:05.311754    4475 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:26:05.311758    4475 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:26:05.311792    4475 start.go:340] cluster config:
	{Name:force-systemd-flag-846000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:26:05.316067    4475 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:26:05.323540    4475 out.go:177] * Starting "force-systemd-flag-846000" primary control-plane node in "force-systemd-flag-846000" cluster
	I0311 04:26:05.327639    4475 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:26:05.327653    4475 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:26:05.327664    4475 cache.go:56] Caching tarball of preloaded images
	I0311 04:26:05.327725    4475 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:26:05.327732    4475 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:26:05.327797    4475 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/force-systemd-flag-846000/config.json ...
	I0311 04:26:05.327808    4475 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/force-systemd-flag-846000/config.json: {Name:mk2525716425d4d280b6c45c5ef6013e57da9d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:26:05.328020    4475 start.go:360] acquireMachinesLock for force-systemd-flag-846000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:05.328054    4475 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "force-systemd-flag-846000"
	I0311 04:26:05.328065    4475 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:05.328097    4475 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:05.335608    4475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:26:05.352503    4475 start.go:159] libmachine.API.Create for "force-systemd-flag-846000" (driver="qemu2")
	I0311 04:26:05.352533    4475 client.go:168] LocalClient.Create starting
	I0311 04:26:05.352587    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:05.352616    4475 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:05.352628    4475 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:05.352675    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:05.352698    4475 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:05.352707    4475 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:05.353086    4475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:05.493862    4475 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:05.579703    4475 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:05.579708    4475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:05.579878    4475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2
	I0311 04:26:05.592172    4475 main.go:141] libmachine: STDOUT: 
	I0311 04:26:05.592189    4475 main.go:141] libmachine: STDERR: 
	I0311 04:26:05.592237    4475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2 +20000M
	I0311 04:26:05.603309    4475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:05.603326    4475 main.go:141] libmachine: STDERR: 
	I0311 04:26:05.603343    4475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2
	I0311 04:26:05.603346    4475 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:05.603372    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:18:c3:91:df:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2
	I0311 04:26:05.605152    4475 main.go:141] libmachine: STDOUT: 
	I0311 04:26:05.605177    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:05.605198    4475 client.go:171] duration metric: took 252.665416ms to LocalClient.Create
	I0311 04:26:07.607372    4475 start.go:128] duration metric: took 2.279292709s to createHost
	I0311 04:26:07.607464    4475 start.go:83] releasing machines lock for "force-systemd-flag-846000", held for 2.279450333s
	W0311 04:26:07.607563    4475 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:07.624046    4475 out.go:177] * Deleting "force-systemd-flag-846000" in qemu2 ...
	W0311 04:26:07.681567    4475 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:07.681617    4475 start.go:728] Will try again in 5 seconds ...
	I0311 04:26:12.683744    4475 start.go:360] acquireMachinesLock for force-systemd-flag-846000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:12.684119    4475 start.go:364] duration metric: took 287.583µs to acquireMachinesLock for "force-systemd-flag-846000"
	I0311 04:26:12.684250    4475 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:12.684532    4475 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:12.692803    4475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:26:12.741194    4475 start.go:159] libmachine.API.Create for "force-systemd-flag-846000" (driver="qemu2")
	I0311 04:26:12.741267    4475 client.go:168] LocalClient.Create starting
	I0311 04:26:12.741405    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:12.741472    4475 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:12.741502    4475 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:12.741575    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:12.741624    4475 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:12.741645    4475 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:12.742154    4475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:12.886890    4475 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:12.995270    4475 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:12.995276    4475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:12.995448    4475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2
	I0311 04:26:13.007865    4475 main.go:141] libmachine: STDOUT: 
	I0311 04:26:13.007882    4475 main.go:141] libmachine: STDERR: 
	I0311 04:26:13.007939    4475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2 +20000M
	I0311 04:26:13.018484    4475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:13.018504    4475 main.go:141] libmachine: STDERR: 
	I0311 04:26:13.018516    4475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2
	I0311 04:26:13.018522    4475 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:13.018558    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c3:e0:6c:b8:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-flag-846000/disk.qcow2
	I0311 04:26:13.020315    4475 main.go:141] libmachine: STDOUT: 
	I0311 04:26:13.020332    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:13.020345    4475 client.go:171] duration metric: took 279.07975ms to LocalClient.Create
	I0311 04:26:15.022490    4475 start.go:128] duration metric: took 2.337977333s to createHost
	I0311 04:26:15.022581    4475 start.go:83] releasing machines lock for "force-systemd-flag-846000", held for 2.338489458s
	W0311 04:26:15.022981    4475 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:15.036697    4475 out.go:177] 
	W0311 04:26:15.040763    4475 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:26:15.040790    4475 out.go:239] * 
	* 
	W0311 04:26:15.043510    4475 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:26:15.054637    4475 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-846000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-846000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-846000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.818625ms)

-- stdout --
	* The control-plane node force-systemd-flag-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-846000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-846000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-11 04:26:15.149268 -0700 PDT m=+3112.666663001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-846000 -n force-systemd-flag-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-846000 -n force-systemd-flag-846000: exit status 7 (33.619208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-846000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-846000
--- FAIL: TestForceSystemdFlag (10.07s)
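
TestForceSystemdFlag never reaches its real assertion: with the host Stopped, the docker info probe at docker_test.go:110 exits 83 before any cgroup-driver value exists to compare. For reference, a hypothetical standalone rendering of that probe, with the binary path and profile name copied from the failing invocation above:

	// cgroupcheck.go: hypothetical re-run of the docker_test.go:110 probe.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-846000",
			"ssh", "docker info --format {{.CgroupDriver}}")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// With no running VM this is exit status 83 plus the
			// "host is not running: state=Stopped" message above.
			fmt.Printf("probe failed (%v): %s", err, out)
			return
		}
		if got := strings.TrimSpace(string(out)); got != "systemd" {
			fmt.Printf("expected cgroup driver systemd, got %q\n", got)
		}
	}
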
TestForceSystemdEnv (10.88s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-255000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-255000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.660891958s)

-- stdout --
	* [force-systemd-env-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-255000" primary control-plane node in "force-systemd-env-255000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-255000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0311 04:26:19.635469    4553 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:26:19.635650    4553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:19.635654    4553 out.go:304] Setting ErrFile to fd 2...
	I0311 04:26:19.635656    4553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:19.635796    4553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:26:19.636836    4553 out.go:298] Setting JSON to false
	I0311 04:26:19.653050    4553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3351,"bootTime":1710153028,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:26:19.653105    4553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:26:19.661116    4553 out.go:177] * [force-systemd-env-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:26:19.669037    4553 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:26:19.669085    4553 notify.go:220] Checking for updates...
	I0311 04:26:19.679600    4553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:26:19.683018    4553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:26:19.687064    4553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:26:19.690063    4553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:26:19.693062    4553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0311 04:26:19.696484    4553 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:26:19.696533    4553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:26:19.701004    4553 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:26:19.707975    4553 start.go:297] selected driver: qemu2
	I0311 04:26:19.707983    4553 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:26:19.707989    4553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:26:19.710458    4553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:26:19.714028    4553 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:26:19.717091    4553 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 04:26:19.717137    4553 cni.go:84] Creating CNI manager for ""
	I0311 04:26:19.717146    4553 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:26:19.717151    4553 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:26:19.717193    4553 start.go:340] cluster config:
	{Name:force-systemd-env-255000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:26:19.722197    4553 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:26:19.728869    4553 out.go:177] * Starting "force-systemd-env-255000" primary control-plane node in "force-systemd-env-255000" cluster
	I0311 04:26:19.732974    4553 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:26:19.732992    4553 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:26:19.733005    4553 cache.go:56] Caching tarball of preloaded images
	I0311 04:26:19.733060    4553 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:26:19.733067    4553 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:26:19.733140    4553 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/force-systemd-env-255000/config.json ...
	I0311 04:26:19.733152    4553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/force-systemd-env-255000/config.json: {Name:mkc41dae2137bace1832d0281102779310367af1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:26:19.733367    4553 start.go:360] acquireMachinesLock for force-systemd-env-255000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:19.733400    4553 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "force-systemd-env-255000"
	I0311 04:26:19.733412    4553 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:19.733445    4553 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:19.740004    4553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:26:19.757681    4553 start.go:159] libmachine.API.Create for "force-systemd-env-255000" (driver="qemu2")
	I0311 04:26:19.757708    4553 client.go:168] LocalClient.Create starting
	I0311 04:26:19.757766    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:19.757794    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:19.757804    4553 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:19.757849    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:19.757871    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:19.757878    4553 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:19.758307    4553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:19.898757    4553 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:20.091446    4553 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:20.091456    4553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:20.091693    4553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2
	I0311 04:26:20.104210    4553 main.go:141] libmachine: STDOUT: 
	I0311 04:26:20.104234    4553 main.go:141] libmachine: STDERR: 
	I0311 04:26:20.104305    4553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2 +20000M
	I0311 04:26:20.114958    4553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:20.114973    4553 main.go:141] libmachine: STDERR: 
	I0311 04:26:20.114988    4553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2
	I0311 04:26:20.114992    4553 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:20.115020    4553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:fe:bb:9a:1a:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2
	I0311 04:26:20.116716    4553 main.go:141] libmachine: STDOUT: 
	I0311 04:26:20.116730    4553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:20.116758    4553 client.go:171] duration metric: took 359.040959ms to LocalClient.Create
	I0311 04:26:22.118747    4553 start.go:128] duration metric: took 2.385346542s to createHost
	I0311 04:26:22.118781    4553 start.go:83] releasing machines lock for "force-systemd-env-255000", held for 2.38542925s
	W0311 04:26:22.118795    4553 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:22.128336    4553 out.go:177] * Deleting "force-systemd-env-255000" in qemu2 ...
	W0311 04:26:22.137151    4553 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:22.137164    4553 start.go:728] Will try again in 5 seconds ...
	I0311 04:26:27.138018    4553 start.go:360] acquireMachinesLock for force-systemd-env-255000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:27.868985    4553 start.go:364] duration metric: took 730.83875ms to acquireMachinesLock for "force-systemd-env-255000"
	I0311 04:26:27.869119    4553 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-255000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-255000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:27.869343    4553 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:27.877854    4553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 04:26:27.925594    4553 start.go:159] libmachine.API.Create for "force-systemd-env-255000" (driver="qemu2")
	I0311 04:26:27.925636    4553 client.go:168] LocalClient.Create starting
	I0311 04:26:27.925777    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:27.925844    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:27.925864    4553 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:27.925923    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:27.925964    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:27.925980    4553 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:27.926498    4553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:28.072183    4553 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:28.183784    4553 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:28.183789    4553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:28.183970    4553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2
	I0311 04:26:28.196369    4553 main.go:141] libmachine: STDOUT: 
	I0311 04:26:28.196384    4553 main.go:141] libmachine: STDERR: 
	I0311 04:26:28.196448    4553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2 +20000M
	I0311 04:26:28.206937    4553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:28.206957    4553 main.go:141] libmachine: STDERR: 
	I0311 04:26:28.206969    4553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2
	I0311 04:26:28.206975    4553 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:28.207004    4553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:79:c3:b8:30:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/force-systemd-env-255000/disk.qcow2
	I0311 04:26:28.208725    4553 main.go:141] libmachine: STDOUT: 
	I0311 04:26:28.208742    4553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:28.208757    4553 client.go:171] duration metric: took 283.122166ms to LocalClient.Create
	I0311 04:26:30.211017    4553 start.go:128] duration metric: took 2.341653375s to createHost
	I0311 04:26:30.211128    4553 start.go:83] releasing machines lock for "force-systemd-env-255000", held for 2.342118208s
	W0311 04:26:30.211471    4553 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:30.227145    4553 out.go:177] 
	W0311 04:26:30.235163    4553 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:26:30.235296    4553 out.go:239] * 
	* 
	W0311 04:26:30.237884    4553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:26:30.248035    4553 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-255000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-255000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-255000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.706375ms)

-- stdout --
	* The control-plane node force-systemd-env-255000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-255000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-255000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
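For context, the assertion behind this exit status 83: once a VM is actually running with MINIKUBE_FORCE_SYSTEMD=true, docker_test.go expects Docker inside the guest to report the systemd cgroup driver. A manual equivalent of the probe, using the profile name from this run (the expected output is inferred from the env var; this run never got far enough to produce it):

	out/minikube-darwin-arm64 -p force-systemd-env-255000 ssh "docker info --format {{.CgroupDriver}}"
	# expected on a healthy run: systemd
	# observed here: exit status 83, since the host VM is Stopped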
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-11 04:26:30.342836 -0700 PDT m=+3127.860566542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-255000 -n force-systemd-env-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-255000 -n force-systemd-env-255000: exit status 7 (35.244875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-255000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-255000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-255000
--- FAIL: TestForceSystemdEnv (10.88s)
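Every qemu2 start in this group fails identically: the driver launches the VM through socket_vmnet_client, which cannot reach the socket at /var/run/socket_vmnet, so both creation attempts die with "Connection refused" and minikube exits with GUEST_PROVISION. A minimal host-side triage sketch follows; the socket path and client binary come from the log above, while the Homebrew service commands are an assumption about how socket_vmnet is managed on this agent:

	# Does the socket exist, and who owns it?
	ls -l /var/run/socket_vmnet

	# Is the daemon running? (assumes a Homebrew-managed socket_vmnet install)
	sudo brew services info socket_vmnet

	# Restart it, then retry one failing start to confirm:
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 start -p force-systemd-env-255000 --memory=2048 --driver=qemu2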

TestFunctional/parallel/ServiceCmdConnect (39.2s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-864000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-864000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-rg49c" [64b31646-abf5-49aa-a18e-70c30e65002e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-rg49c" [64b31646-abf5-49aa-a18e-70c30e65002e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.003182416s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:32535
functional_test.go:1657: error fetching http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:32535: Get "http://192.168.105.4:32535": dial tcp 192.168.105.4:32535: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-864000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-rg49c
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-864000/192.168.105.4
Start Time:       Mon, 11 Mar 2024 03:45:45 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://5b443da1bcadb157eee4f5238eb7a36ca12120d958dfcb40f2df89be825de835
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 11 Mar 2024 03:46:10 -0700
      Finished:     Mon, 11 Mar 2024 03:46:10 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bpbt2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-bpbt2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  38s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-rg49c to functional-864000
  Normal   Pulling    37s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     31s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.797s (6.367s including waiting)
  Normal   Created    13s (x3 over 31s)  kubelet            Created container echoserver-arm
  Normal   Started    13s (x3 over 31s)  kubelet            Started container echoserver-arm
  Normal   Pulled     13s (x2 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    1s (x4 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-rg49c_default(64b31646-abf5-49aa-a18e-70c30e65002e)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-864000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-864000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.225.227
IPs:                      10.108.225.227
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32535/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
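The two describe blocks above explain the connection-refused loop: the Service has no ready Endpoints because its only pod is in CrashLoopBackOff, and the pod log ("exec /usr/sbin/nginx: exec format error") points to a binary/architecture mismatch rather than a networking fault. A quick way to confirm that reading; the image and context names are from this run, and the manifest check is a generic diagnostic, not part of the test:

	# Which architectures does the image manifest actually list?
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8 | grep architecture

	# Which architecture is the node the pod landed on?
	kubectl --context functional-864000 get node functional-864000 -o jsonpath='{.status.nodeInfo.architecture}'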
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-864000 -n functional-864000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2511247930/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh -- ls                                                                                          | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh cat                                                                                            | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | /mount-9p/test-1710153968998786000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh stat                                                                                           | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh stat                                                                                           | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh sudo                                                                                           | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2924177558/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh -- ls                                                                                          | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh sudo                                                                                           | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-864000 ssh findmnt                                                                                        | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT | 11 Mar 24 03:46 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-864000                                                                                                 | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-864000 --dry-run                                                                                       | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-864000 | jenkins | v1.32.0 | 11 Mar 24 03:46 PDT |                     |
	|           | -p functional-864000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 03:46:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 03:46:20.412028    2387 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:46:20.412878    2387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:46:20.412902    2387 out.go:304] Setting ErrFile to fd 2...
	I0311 03:46:20.412910    2387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:46:20.413382    2387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:46:20.414627    2387 out.go:298] Setting JSON to false
	I0311 03:46:20.431744    2387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":952,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:46:20.431832    2387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:46:20.436371    2387 out.go:177] * [functional-864000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 03:46:20.443421    2387 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 03:46:20.447355    2387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:46:20.443473    2387 notify.go:220] Checking for updates...
	I0311 03:46:20.453362    2387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:46:20.456422    2387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:46:20.459392    2387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 03:46:20.462352    2387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 03:46:20.465662    2387 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:46:20.465927    2387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:46:20.470371    2387 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 03:46:20.477334    2387 start.go:297] selected driver: qemu2
	I0311 03:46:20.477339    2387 start.go:901] validating driver "qemu2" against &{Name:functional-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:46:20.477393    2387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 03:46:20.479545    2387 cni.go:84] Creating CNI manager for ""
	I0311 03:46:20.479561    2387 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 03:46:20.479601    2387 start.go:340] cluster config:
	{Name:functional-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:46:20.490225    2387 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Mar 11 10:46:16 functional-864000 dockerd[7026]: time="2024-03-11T10:46:16.113058900Z" level=info msg="ignoring event" container=628e855e63a88d19e8019889adabda5e3dba1365833ff8c36381223b5dbe4ff0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 10:46:16 functional-864000 dockerd[7032]: time="2024-03-11T10:46:16.113343107Z" level=info msg="shim disconnected" id=628e855e63a88d19e8019889adabda5e3dba1365833ff8c36381223b5dbe4ff0 namespace=moby
	Mar 11 10:46:16 functional-864000 dockerd[7032]: time="2024-03-11T10:46:16.113388023Z" level=warning msg="cleaning up after shim disconnected" id=628e855e63a88d19e8019889adabda5e3dba1365833ff8c36381223b5dbe4ff0 namespace=moby
	Mar 11 10:46:16 functional-864000 dockerd[7032]: time="2024-03-11T10:46:16.113393190Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 10:46:17 functional-864000 dockerd[7032]: time="2024-03-11T10:46:17.865315248Z" level=info msg="shim disconnected" id=0d6f4e9f26c521fd15c2b85239395fb1682b6035ec784b919473f70a1a43d287 namespace=moby
	Mar 11 10:46:17 functional-864000 dockerd[7026]: time="2024-03-11T10:46:17.865516872Z" level=info msg="ignoring event" container=0d6f4e9f26c521fd15c2b85239395fb1682b6035ec784b919473f70a1a43d287 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 10:46:17 functional-864000 dockerd[7032]: time="2024-03-11T10:46:17.865788412Z" level=warning msg="cleaning up after shim disconnected" id=0d6f4e9f26c521fd15c2b85239395fb1682b6035ec784b919473f70a1a43d287 namespace=moby
	Mar 11 10:46:17 functional-864000 dockerd[7032]: time="2024-03-11T10:46:17.865798537Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.529160479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.529376269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.529386936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.529474311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.533332338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.533363630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.533528254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:46:21 functional-864000 dockerd[7032]: time="2024-03-11T10:46:21.533648629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:46:21 functional-864000 cri-dockerd[7233]: time="2024-03-11T10:46:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4e61bbf24606ec0bc678d9eff5efa1a28e1a7f28d834795d37f77d5ac6fd36da/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 11 10:46:21 functional-864000 cri-dockerd[7233]: time="2024-03-11T10:46:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c868b93754997ced105d492c6f447cd661697f637f34399150ab771b22f0115e/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 11 10:46:21 functional-864000 dockerd[7026]: time="2024-03-11T10:46:21.818348637Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Mar 11 10:46:23 functional-864000 cri-dockerd[7233]: time="2024-03-11T10:46:23Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Mar 11 10:46:23 functional-864000 dockerd[7032]: time="2024-03-11T10:46:23.588386416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 10:46:23 functional-864000 dockerd[7032]: time="2024-03-11T10:46:23.588441458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 10:46:23 functional-864000 dockerd[7032]: time="2024-03-11T10:46:23.588447499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:46:23 functional-864000 dockerd[7032]: time="2024-03-11T10:46:23.588476291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 10:46:23 functional-864000 dockerd[7026]: time="2024-03-11T10:46:23.755201040Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	f90dcc65a3b43       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   1 second ago         Running             dashboard-metrics-scraper   0                   4e61bbf24606e       dashboard-metrics-scraper-7fd5cb4ddc-8g9dp
	628e855e63a88       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    8 seconds ago        Exited              mount-munger                0                   0d6f4e9f26c52       busybox-mount
	0c782db936a0e       72565bf5bbedf                                                                                          10 seconds ago       Exited              echoserver-arm              2                   4bcb48074a0c6       hello-node-759d89bdcc-sb665
	5b443da1bcadb       72565bf5bbedf                                                                                          14 seconds ago       Exited              echoserver-arm              2                   61c355f5725c4       hello-node-connect-7799dfb7c6-rg49c
	c189b4b26e3f8       nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107                          30 seconds ago       Running             myfrontend                  0                   29006f72adcf7       sp-pod
	90f0763fac2be       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                          45 seconds ago       Running             nginx                       0                   94fc66b6793e0       nginx-svc
	3425eb464ef39       97e04611ad434                                                                                          About a minute ago   Running             coredns                     2                   bbc1fd850693b       coredns-5dd5756b68-qfxlh
	a0ed7dec3b15c       3ca3ca488cf13                                                                                          About a minute ago   Running             kube-proxy                  2                   2dc97ac6f9507       kube-proxy-b42xv
	427ab21420f87       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   d47e88a01de3f       storage-provisioner
	c372c60633944       9961cbceaf234                                                                                          About a minute ago   Running             kube-controller-manager     2                   1cd6df0c22342       kube-controller-manager-functional-864000
	1e31b04df8cd9       9cdd6470f48c8                                                                                          About a minute ago   Running             etcd                        2                   bf89096a35ab4       etcd-functional-864000
	9e4fd918917d4       05c284c929889                                                                                          About a minute ago   Running             kube-scheduler              2                   7744c9b97174e       kube-scheduler-functional-864000
	2348097908e70       04b4c447bb9d4                                                                                          About a minute ago   Running             kube-apiserver              0                   b23b67e8b4ed7       kube-apiserver-functional-864000
	cb4bad945f67e       97e04611ad434                                                                                          2 minutes ago        Exited              coredns                     1                   803588bb21d86       coredns-5dd5756b68-qfxlh
	01e126b0a7870       ba04bb24b9575                                                                                          2 minutes ago        Exited              storage-provisioner         1                   5f759e9957cc1       storage-provisioner
	1d2560ee7bfd8       3ca3ca488cf13                                                                                          2 minutes ago        Exited              kube-proxy                  1                   4b02781ed47fe       kube-proxy-b42xv
	be90ea69d25d6       05c284c929889                                                                                          2 minutes ago        Exited              kube-scheduler              1                   986bcb31be6fe       kube-scheduler-functional-864000
	fe7cf2437cb30       9961cbceaf234                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   abfafc15ed032       kube-controller-manager-functional-864000
	97a499d4c0c37       9cdd6470f48c8                                                                                          2 minutes ago        Exited              etcd                        1                   c2f33e642c09f       etcd-functional-864000
	
	
	==> coredns [3425eb464ef3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35641 - 63234 "HINFO IN 1573641646540520748.9093890841930552239. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005052798s
	[INFO] 10.244.0.1:25854 - 15798 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000130374s
	[INFO] 10.244.0.1:59192 - 16226 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000078958s
	[INFO] 10.244.0.1:21187 - 24428 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000818786s
	[INFO] 10.244.0.1:39080 - 61946 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000028208s
	[INFO] 10.244.0.1:57718 - 42759 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000046958s
	[INFO] 10.244.0.1:34581 - 64998 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000154207s
	
	
	==> coredns [cb4bad945f67] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53172 - 3419 "HINFO IN 6108453318128786896.3004070877496865590. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.003979523s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-864000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-864000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=functional-864000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T03_43_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 10:43:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-864000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 10:46:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 10:46:11 +0000   Mon, 11 Mar 2024 10:43:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 10:46:11 +0000   Mon, 11 Mar 2024 10:43:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 10:46:11 +0000   Mon, 11 Mar 2024 10:43:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 10:46:11 +0000   Mon, 11 Mar 2024 10:43:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-864000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6ef3733b2f84a7ca940881e626f470d
	  System UUID:                d6ef3733b2f84a7ca940881e626f470d
	  Boot ID:                    54b8ed9e-a291-4c86-b617-d9a9eb444e0d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-sb665                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     hello-node-connect-7799dfb7c6-rg49c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 coredns-5dd5756b68-qfxlh                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m24s
	  kube-system                 etcd-functional-864000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m38s
	  kube-system                 kube-apiserver-functional-864000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-864000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-proxy-b42xv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-functional-864000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-8g9dp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xp2fq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m23s                kube-proxy       
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 2m2s                 kube-proxy       
	  Normal  Starting                 2m39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s                kubelet          Node functional-864000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s                kubelet          Node functional-864000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s                kubelet          Node functional-864000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m34s                kubelet          Node functional-864000 status is now: NodeReady
	  Normal  RegisteredNode           2m25s                node-controller  Node functional-864000 event: Registered Node functional-864000 in Controller
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node functional-864000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m6s (x9 over 2m6s)  kubelet          Node functional-864000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x7 over 2m6s)  kubelet          Node functional-864000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                 node-controller  Node functional-864000 event: Registered Node functional-864000 in Controller
	  Normal  Starting                 77s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)    kubelet          Node functional-864000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)    kubelet          Node functional-864000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)    kubelet          Node functional-864000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                  node-controller  Node functional-864000 event: Registered Node functional-864000 in Controller
	
	
	==> dmesg <==
	[ +11.499083] kauditd_printk_skb: 27 callbacks suppressed
	[  +3.357895] systemd-fstab-generator[5904]: Ignoring "noauto" option for root device
	[ +17.269529] systemd-fstab-generator[6554]: Ignoring "noauto" option for root device
	[  +0.054625] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.119651] systemd-fstab-generator[6589]: Ignoring "noauto" option for root device
	[  +0.102157] systemd-fstab-generator[6602]: Ignoring "noauto" option for root device
	[  +0.106847] systemd-fstab-generator[6615]: Ignoring "noauto" option for root device
	[  +5.088729] kauditd_printk_skb: 89 callbacks suppressed
	[Mar11 10:45] systemd-fstab-generator[7186]: Ignoring "noauto" option for root device
	[  +0.097041] systemd-fstab-generator[7198]: Ignoring "noauto" option for root device
	[  +0.085611] systemd-fstab-generator[7210]: Ignoring "noauto" option for root device
	[  +0.101317] systemd-fstab-generator[7225]: Ignoring "noauto" option for root device
	[  +0.215992] systemd-fstab-generator[7375]: Ignoring "noauto" option for root device
	[  +0.863287] systemd-fstab-generator[7492]: Ignoring "noauto" option for root device
	[  +4.471068] kauditd_printk_skb: 202 callbacks suppressed
	[ +11.008680] kauditd_printk_skb: 27 callbacks suppressed
	[  +4.348498] systemd-fstab-generator[8756]: Ignoring "noauto" option for root device
	[  +4.665364] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.538123] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.259731] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.206805] kauditd_printk_skb: 19 callbacks suppressed
	[Mar11 10:46] kauditd_printk_skb: 25 callbacks suppressed
	[  +9.523754] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.663443] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.135677] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [1e31b04df8cd] <==
	{"level":"info","ts":"2024-03-11T10:45:08.113679Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-03-11T10:45:08.116336Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T10:45:08.116375Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T10:45:08.113619Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T10:45:08.113501Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T10:45:08.116484Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T10:45:08.113502Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T10:45:08.116515Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T10:45:08.11655Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T10:45:08.113648Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T10:45:08.113449Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-11T10:45:09.974188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-11T10:45:09.97437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-11T10:45:09.974434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-11T10:45:09.974467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-03-11T10:45:09.974483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-11T10:45:09.974509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-03-11T10:45:09.974542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-11T10:45:09.979788Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-864000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T10:45:09.979854Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T10:45:09.98011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T10:45:09.980149Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T10:45:09.980186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T10:45:09.982219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-11T10:45:09.982687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [97a499d4c0c3] <==
	{"level":"info","ts":"2024-03-11T10:44:19.120229Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T10:44:20.317947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-11T10:44:20.318112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-11T10:44:20.318155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-03-11T10:44:20.318189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-03-11T10:44:20.318205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-11T10:44:20.31823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-03-11T10:44:20.318285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-11T10:44:20.320481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T10:44:20.320744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T10:44:20.322651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T10:44:20.322697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-11T10:44:20.320487Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-864000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T10:44:20.323143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T10:44:20.323429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T10:44:54.491444Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-11T10:44:54.49148Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-864000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-03-11T10:44:54.491523Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T10:44:54.491566Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T10:44:54.501399Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T10:44:54.50142Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-11T10:44:54.502572Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-03-11T10:44:54.503964Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T10:44:54.503998Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T10:44:54.504002Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-864000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 10:46:24 up 2 min,  0 users,  load average: 0.59, 0.38, 0.16
	Linux functional-864000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2348097908e7] <==
	I0311 10:45:10.639207       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 10:45:10.639484       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0311 10:45:10.640489       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0311 10:45:10.641173       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 10:45:10.641205       1 aggregator.go:166] initial CRD sync complete...
	I0311 10:45:10.641217       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 10:45:10.641223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 10:45:10.641229       1 cache.go:39] Caches are synced for autoregister controller
	I0311 10:45:10.705862       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0311 10:45:11.540360       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 10:45:11.945600       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0311 10:45:11.948777       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0311 10:45:11.960428       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0311 10:45:11.968524       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 10:45:11.970648       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 10:45:22.708182       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 10:45:22.919805       1 controller.go:624] quota admission added evaluator for: endpoints
	I0311 10:45:31.721049       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.116.16"}
	I0311 10:45:36.257585       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.104.49"}
	I0311 10:45:45.661267       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0311 10:45:45.753293       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.225.227"}
	I0311 10:46:00.897111       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.151.164"}
	I0311 10:46:21.124704       1 controller.go:624] quota admission added evaluator for: namespaces
	I0311 10:46:21.222057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.248.123"}
	I0311 10:46:21.234999       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.76.59"}
	
	
	==> kube-controller-manager [c372c6063394] <==
	E0311 10:46:21.160637       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 10:46:21.164739       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 10:46:21.164841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="4.192277ms"
	E0311 10:46:21.164890       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 10:46:21.164903       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 10:46:21.170114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.602673ms"
	E0311 10:46:21.170129       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 10:46:21.170217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="4.495817ms"
	E0311 10:46:21.170244       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 10:46:21.170368       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 10:46:21.174208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="3.478071ms"
	E0311 10:46:21.174267       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 10:46:21.174246       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 10:46:21.178796       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xp2fq"
	I0311 10:46:21.185309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.013047ms"
	I0311 10:46:21.194187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.852801ms"
	I0311 10:46:21.194224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.458µs"
	I0311 10:46:21.195703       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-8g9dp"
	I0311 10:46:21.199755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="7.595348ms"
	I0311 10:46:21.204069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="4.262359ms"
	I0311 10:46:21.216113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="11.947873ms"
	I0311 10:46:21.216148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="15.292µs"
	I0311 10:46:22.373861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="40.292µs"
	I0311 10:46:23.838009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.987826ms"
	I0311 10:46:23.838111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="27.667µs"
	
	
	==> kube-controller-manager [fe7cf2437cb3] <==
	I0311 10:44:33.394487       1 taint_manager.go:210] "Sending events to api server"
	I0311 10:44:33.394584       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0311 10:44:33.394614       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-864000"
	I0311 10:44:33.394633       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0311 10:44:33.394775       1 event.go:307] "Event occurred" object="functional-864000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-864000 event: Registered Node functional-864000 in Controller"
	I0311 10:44:33.395398       1 shared_informer.go:318] Caches are synced for endpoint
	I0311 10:44:33.395806       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0311 10:44:33.398587       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0311 10:44:33.401089       1 shared_informer.go:318] Caches are synced for HPA
	I0311 10:44:33.402303       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0311 10:44:33.402767       1 shared_informer.go:318] Caches are synced for PV protection
	I0311 10:44:33.444443       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0311 10:44:33.444446       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0311 10:44:33.445490       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0311 10:44:33.445502       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0311 10:44:33.497416       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0311 10:44:33.554707       1 shared_informer.go:318] Caches are synced for disruption
	I0311 10:44:33.558828       1 shared_informer.go:318] Caches are synced for job
	I0311 10:44:33.558957       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0311 10:44:33.591536       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 10:44:33.597033       1 shared_informer.go:318] Caches are synced for cronjob
	I0311 10:44:33.604861       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 10:44:33.916980       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 10:44:33.941875       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 10:44:33.941921       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-proxy [1d2560ee7bfd] <==
	I0311 10:44:22.066801       1 server_others.go:69] "Using iptables proxy"
	I0311 10:44:22.076231       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0311 10:44:22.098568       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 10:44:22.098581       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 10:44:22.100138       1 server_others.go:152] "Using iptables Proxier"
	I0311 10:44:22.100160       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 10:44:22.100243       1 server.go:846] "Version info" version="v1.28.4"
	I0311 10:44:22.100248       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 10:44:22.100653       1 config.go:188] "Starting service config controller"
	I0311 10:44:22.100666       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 10:44:22.100673       1 config.go:97] "Starting endpoint slice config controller"
	I0311 10:44:22.100676       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 10:44:22.100982       1 config.go:315] "Starting node config controller"
	I0311 10:44:22.100991       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 10:44:22.200754       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 10:44:22.200783       1 shared_informer.go:318] Caches are synced for service config
	I0311 10:44:22.201003       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [a0ed7dec3b15] <==
	I0311 10:45:11.886465       1 server_others.go:69] "Using iptables proxy"
	I0311 10:45:11.895709       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0311 10:45:11.916843       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 10:45:11.916855       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 10:45:11.917529       1 server_others.go:152] "Using iptables Proxier"
	I0311 10:45:11.917569       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 10:45:11.917660       1 server.go:846] "Version info" version="v1.28.4"
	I0311 10:45:11.917722       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 10:45:11.918038       1 config.go:188] "Starting service config controller"
	I0311 10:45:11.918062       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 10:45:11.918084       1 config.go:97] "Starting endpoint slice config controller"
	I0311 10:45:11.918098       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 10:45:11.918314       1 config.go:315] "Starting node config controller"
	I0311 10:45:11.918344       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 10:45:12.019163       1 shared_informer.go:318] Caches are synced for node config
	I0311 10:45:12.019165       1 shared_informer.go:318] Caches are synced for service config
	I0311 10:45:12.019176       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9e4fd918917d] <==
	W0311 10:45:10.608446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 10:45:10.611510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 10:45:10.608482       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 10:45:10.611544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 10:45:10.608505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 10:45:10.611591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 10:45:10.608548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 10:45:10.611624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 10:45:10.608647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 10:45:10.611674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 10:45:10.608724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 10:45:10.611708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 10:45:10.608867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 10:45:10.611769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 10:45:10.611278       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 10:45:10.611798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 10:45:10.611374       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 10:45:10.611844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 10:45:10.611901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 10:45:10.611934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 10:45:10.611969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 10:45:10.611992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 10:45:10.612039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 10:45:10.612059       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0311 10:45:11.498988       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [be90ea69d25d] <==
	I0311 10:44:19.889489       1 serving.go:348] Generated self-signed cert in-memory
	W0311 10:44:20.936580       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 10:44:20.936595       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 10:44:20.936601       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 10:44:20.936604       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 10:44:20.980265       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0311 10:44:20.980282       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 10:44:20.981291       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 10:44:20.981387       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 10:44:20.981397       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 10:44:20.981405       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 10:44:21.081517       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 10:44:54.496494       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0311 10:44:54.496518       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0311 10:44:54.496576       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 11 10:46:10 functional-864000 kubelet[7499]: I0311 10:46:10.370559    7499 scope.go:117] "RemoveContainer" containerID="9165d7ef8e99beac253d575c7b05664edc3aec047f72cb0a3fada55748633590"
	Mar 11 10:46:10 functional-864000 kubelet[7499]: I0311 10:46:10.747694    7499 scope.go:117] "RemoveContainer" containerID="9165d7ef8e99beac253d575c7b05664edc3aec047f72cb0a3fada55748633590"
	Mar 11 10:46:10 functional-864000 kubelet[7499]: I0311 10:46:10.747824    7499 scope.go:117] "RemoveContainer" containerID="5b443da1bcadb157eee4f5238eb7a36ca12120d958dfcb40f2df89be825de835"
	Mar 11 10:46:10 functional-864000 kubelet[7499]: E0311 10:46:10.747908    7499 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-rg49c_default(64b31646-abf5-49aa-a18e-70c30e65002e)\"" pod="default/hello-node-connect-7799dfb7c6-rg49c" podUID="64b31646-abf5-49aa-a18e-70c30e65002e"
	Mar 11 10:46:14 functional-864000 kubelet[7499]: I0311 10:46:14.369443    7499 scope.go:117] "RemoveContainer" containerID="45a0a189f5303c4027530a000fabce1b607372375d7992e3503e42f311fa4d97"
	Mar 11 10:46:14 functional-864000 kubelet[7499]: I0311 10:46:14.774077    7499 scope.go:117] "RemoveContainer" containerID="45a0a189f5303c4027530a000fabce1b607372375d7992e3503e42f311fa4d97"
	Mar 11 10:46:14 functional-864000 kubelet[7499]: I0311 10:46:14.774236    7499 scope.go:117] "RemoveContainer" containerID="0c782db936a0ec98618fa6ba93025d1283d9dd97e7b7cdf67daea65aee478b02"
	Mar 11 10:46:14 functional-864000 kubelet[7499]: E0311 10:46:14.774322    7499 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-sb665_default(51c78233-5012-49d5-93c1-d96157bd2e65)\"" pod="default/hello-node-759d89bdcc-sb665" podUID="51c78233-5012-49d5-93c1-d96157bd2e65"
	Mar 11 10:46:18 functional-864000 kubelet[7499]: I0311 10:46:18.034213    7499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4f5eab2b-7379-4a7a-89e3-3e46a1e308db-test-volume\") pod \"4f5eab2b-7379-4a7a-89e3-3e46a1e308db\" (UID: \"4f5eab2b-7379-4a7a-89e3-3e46a1e308db\") "
	Mar 11 10:46:18 functional-864000 kubelet[7499]: I0311 10:46:18.034240    7499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt76b\" (UniqueName: \"kubernetes.io/projected/4f5eab2b-7379-4a7a-89e3-3e46a1e308db-kube-api-access-xt76b\") pod \"4f5eab2b-7379-4a7a-89e3-3e46a1e308db\" (UID: \"4f5eab2b-7379-4a7a-89e3-3e46a1e308db\") "
	Mar 11 10:46:18 functional-864000 kubelet[7499]: I0311 10:46:18.034410    7499 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5eab2b-7379-4a7a-89e3-3e46a1e308db-test-volume" (OuterVolumeSpecName: "test-volume") pod "4f5eab2b-7379-4a7a-89e3-3e46a1e308db" (UID: "4f5eab2b-7379-4a7a-89e3-3e46a1e308db"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 11 10:46:18 functional-864000 kubelet[7499]: I0311 10:46:18.035038    7499 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5eab2b-7379-4a7a-89e3-3e46a1e308db-kube-api-access-xt76b" (OuterVolumeSpecName: "kube-api-access-xt76b") pod "4f5eab2b-7379-4a7a-89e3-3e46a1e308db" (UID: "4f5eab2b-7379-4a7a-89e3-3e46a1e308db"). InnerVolumeSpecName "kube-api-access-xt76b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 10:46:18 functional-864000 kubelet[7499]: I0311 10:46:18.134339    7499 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xt76b\" (UniqueName: \"kubernetes.io/projected/4f5eab2b-7379-4a7a-89e3-3e46a1e308db-kube-api-access-xt76b\") on node \"functional-864000\" DevicePath \"\""
	Mar 11 10:46:18 functional-864000 kubelet[7499]: I0311 10:46:18.134356    7499 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4f5eab2b-7379-4a7a-89e3-3e46a1e308db-test-volume\") on node \"functional-864000\" DevicePath \"\""
	Mar 11 10:46:18 functional-864000 kubelet[7499]: I0311 10:46:18.798702    7499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6f4e9f26c521fd15c2b85239395fb1682b6035ec784b919473f70a1a43d287"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: I0311 10:46:21.181930    7499 topology_manager.go:215] "Topology Admit Handler" podUID="d91f9a18-2c6a-45bf-a759-0e657d91950c" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-xp2fq"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: E0311 10:46:21.181964    7499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f5eab2b-7379-4a7a-89e3-3e46a1e308db" containerName="mount-munger"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: I0311 10:46:21.181983    7499 memory_manager.go:346] "RemoveStaleState removing state" podUID="4f5eab2b-7379-4a7a-89e3-3e46a1e308db" containerName="mount-munger"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: I0311 10:46:21.199617    7499 topology_manager.go:215] "Topology Admit Handler" podUID="5f51ed97-3791-494e-8347-a277f04ee8bd" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-8g9dp"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: I0311 10:46:21.351006    7499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f51ed97-3791-494e-8347-a277f04ee8bd-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-8g9dp\" (UID: \"5f51ed97-3791-494e-8347-a277f04ee8bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-8g9dp"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: I0311 10:46:21.351044    7499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rpw5\" (UniqueName: \"kubernetes.io/projected/5f51ed97-3791-494e-8347-a277f04ee8bd-kube-api-access-9rpw5\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-8g9dp\" (UID: \"5f51ed97-3791-494e-8347-a277f04ee8bd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-8g9dp"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: I0311 10:46:21.351057    7499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8878c\" (UniqueName: \"kubernetes.io/projected/d91f9a18-2c6a-45bf-a759-0e657d91950c-kube-api-access-8878c\") pod \"kubernetes-dashboard-8694d4445c-xp2fq\" (UID: \"d91f9a18-2c6a-45bf-a759-0e657d91950c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xp2fq"
	Mar 11 10:46:21 functional-864000 kubelet[7499]: I0311 10:46:21.351067    7499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d91f9a18-2c6a-45bf-a759-0e657d91950c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xp2fq\" (UID: \"d91f9a18-2c6a-45bf-a759-0e657d91950c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xp2fq"
	Mar 11 10:46:22 functional-864000 kubelet[7499]: I0311 10:46:22.369391    7499 scope.go:117] "RemoveContainer" containerID="5b443da1bcadb157eee4f5238eb7a36ca12120d958dfcb40f2df89be825de835"
	Mar 11 10:46:22 functional-864000 kubelet[7499]: E0311 10:46:22.369491    7499 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-rg49c_default(64b31646-abf5-49aa-a18e-70c30e65002e)\"" pod="default/hello-node-connect-7799dfb7c6-rg49c" podUID="64b31646-abf5-49aa-a18e-70c30e65002e"
	
	
	==> storage-provisioner [01e126b0a787] <==
	I0311 10:44:22.027716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 10:44:22.047956       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 10:44:22.048536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 10:44:39.440452       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 10:44:39.440603       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-864000_9c842216-27a8-40e0-919d-5eec42d7d657!
	I0311 10:44:39.440791       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"affb5ad1-d0d2-4806-ac41-aeef6d147fb5", APIVersion:"v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-864000_9c842216-27a8-40e0-919d-5eec42d7d657 became leader
	I0311 10:44:39.541667       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-864000_9c842216-27a8-40e0-919d-5eec42d7d657!
	
	
	==> storage-provisioner [427ab21420f8] <==
	I0311 10:45:11.824980       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 10:45:11.834972       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 10:45:11.835043       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 10:45:29.219713       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 10:45:29.220178       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"affb5ad1-d0d2-4806-ac41-aeef6d147fb5", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-864000_26830f60-69f5-4d5d-aa46-7f906d06132f became leader
	I0311 10:45:29.220241       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-864000_26830f60-69f5-4d5d-aa46-7f906d06132f!
	I0311 10:45:29.320401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-864000_26830f60-69f5-4d5d-aa46-7f906d06132f!
	I0311 10:45:42.079287       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0311 10:45:42.079425       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    b189a457-123d-4cf8-be02-7e51228970d6 337 0 2024-03-11 10:44:00 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-11 10:44:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-12b81c18-6b2d-4d99-8c06-544484f2d7b1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  12b81c18-6b2d-4d99-8c06-544484f2d7b1 644 0 2024-03-11 10:45:42 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-11 10:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-11 10:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0311 10:45:42.080055       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-12b81c18-6b2d-4d99-8c06-544484f2d7b1" provisioned
	I0311 10:45:42.080117       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0311 10:45:42.080140       1 volume_store.go:212] Trying to save persistentvolume "pvc-12b81c18-6b2d-4d99-8c06-544484f2d7b1"
	I0311 10:45:42.079961       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"12b81c18-6b2d-4d99-8c06-544484f2d7b1", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0311 10:45:42.083896       1 volume_store.go:219] persistentvolume "pvc-12b81c18-6b2d-4d99-8c06-544484f2d7b1" saved
	I0311 10:45:42.084150       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"12b81c18-6b2d-4d99-8c06-544484f2d7b1", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-12b81c18-6b2d-4d99-8c06-544484f2d7b1
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-864000 -n functional-864000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-864000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-8694d4445c-xp2fq
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-864000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-xp2fq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-864000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-xp2fq: exit status 1 (48.664834ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-864000/192.168.105.4
	Start Time:       Mon, 11 Mar 2024 03:46:10 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://628e855e63a88d19e8019889adabda5e3dba1365833ff8c36381223b5dbe4ff0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 11 Mar 2024 03:46:16 -0700
	      Finished:     Mon, 11 Mar 2024 03:46:16 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xt76b (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xt76b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-864000
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.552s (5.552s including waiting)
	  Normal  Created    8s    kubelet            Created container mount-munger
	  Normal  Started    8s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-xp2fq" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-864000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-xp2fq: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (39.20s)
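The kubelet entries above show the echoserver-arm container for hello-node-connect in CrashLoopBackOff ("back-off 20s restarting failed container"), which is why the service connect check never got a response. A minimal triage sketch for this pattern, with the pod name copied from the kubelet log (it will differ between runs):

	$ kubectl --context functional-864000 logs hello-node-connect-7799dfb7c6-rg49c --previous
	$ kubectl --context functional-864000 get events --field-selector involvedObject.name=hello-node-connect-7799dfb7c6-rg49c

Both commands use stock kubectl flags: --previous prints the log of the crashed container instance, and the field selector narrows events to the failing pod.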

TestMutliControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-600000 node stop m02 -v=7 --alsologtostderr: (12.186920458s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr
E0311 03:53:19.648809    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:54:15.863009    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:55:35.777556    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr: exit status 7 (2m55.968084542s)

-- stdout --
	ha-600000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-600000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-600000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0311 03:52:41.251811    2955 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:52:41.252207    2955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:52:41.252211    2955 out.go:304] Setting ErrFile to fd 2...
	I0311 03:52:41.252213    2955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:52:41.252358    2955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:52:41.252473    2955 out.go:298] Setting JSON to false
	I0311 03:52:41.252485    2955 mustload.go:65] Loading cluster: ha-600000
	I0311 03:52:41.252538    2955 notify.go:220] Checking for updates...
	I0311 03:52:41.252707    2955 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:52:41.252713    2955 status.go:255] checking status of ha-600000 ...
	I0311 03:52:41.253497    2955 status.go:330] ha-600000 host status = "Running" (err=<nil>)
	I0311 03:52:41.253503    2955 host.go:66] Checking if "ha-600000" exists ...
	I0311 03:52:41.253606    2955 host.go:66] Checking if "ha-600000" exists ...
	I0311 03:52:41.253709    2955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 03:52:41.253717    2955 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/id_rsa Username:docker}
	W0311 03:53:07.177016    2955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0311 03:53:07.177382    2955 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 03:53:07.177399    2955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 03:53:07.177403    2955 status.go:257] ha-600000 status: &{Name:ha-600000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 03:53:07.177413    2955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 03:53:07.177423    2955 status.go:255] checking status of ha-600000-m02 ...
	I0311 03:53:07.177645    2955 status.go:330] ha-600000-m02 host status = "Stopped" (err=<nil>)
	I0311 03:53:07.177652    2955 status.go:343] host is not running, skipping remaining checks
	I0311 03:53:07.177654    2955 status.go:257] ha-600000-m02 status: &{Name:ha-600000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 03:53:07.177658    2955 status.go:255] checking status of ha-600000-m03 ...
	I0311 03:53:07.179140    2955 status.go:330] ha-600000-m03 host status = "Running" (err=<nil>)
	I0311 03:53:07.179154    2955 host.go:66] Checking if "ha-600000-m03" exists ...
	I0311 03:53:07.179354    2955 host.go:66] Checking if "ha-600000-m03" exists ...
	I0311 03:53:07.179488    2955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 03:53:07.179503    2955 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m03/id_rsa Username:docker}
	W0311 03:54:22.178003    2955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0311 03:54:22.178052    2955 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0311 03:54:22.178061    2955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 03:54:22.178065    2955 status.go:257] ha-600000-m03 status: &{Name:ha-600000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 03:54:22.178074    2955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 03:54:22.178078    2955 status.go:255] checking status of ha-600000-m04 ...
	I0311 03:54:22.178836    2955 status.go:330] ha-600000-m04 host status = "Running" (err=<nil>)
	I0311 03:54:22.178844    2955 host.go:66] Checking if "ha-600000-m04" exists ...
	I0311 03:54:22.178930    2955 host.go:66] Checking if "ha-600000-m04" exists ...
	I0311 03:54:22.179038    2955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 03:54:22.179044    2955 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m04/id_rsa Username:docker}
	W0311 03:55:37.178318    2955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0311 03:55:37.178365    2955 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0311 03:55:37.178374    2955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0311 03:55:37.178378    2955 status.go:257] ha-600000-m04 status: &{Name:ha-600000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0311 03:55:37.178387    2955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr": ha-600000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-600000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-600000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-600000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr": ha-600000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-600000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-600000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-600000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr": ha-600000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-600000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-600000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-600000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 3 (25.961158291s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0311 03:56:03.138995    3001 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 03:56:03.139006    3001 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (214.12s)
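All three status probes above fail the same way: the SSH dial to the guest times out before minikube can run "df -h /var". A host-side reachability sketch, reusing the addresses and key path from the log (diagnostic only, not part of the test suite):

	$ nc -z -w 5 192.168.105.5 22 && echo port-22-open
	$ ssh -o ConnectTimeout=5 -i /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/id_rsa docker@192.168.105.5 true

If the nc probe already times out, the problem is VM networking (socket_vmnet) rather than sshd inside the guest.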

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.67s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0311 03:56:03.485058    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.698740292s)
ha_test.go:413: expected profile "ha-600000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-600000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-600000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-600000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 3 (25.966503709s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0311 03:57:46.797225    3037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 03:57:46.797268    3037 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.67s)
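The assertion compares the Status field in the output of "profile list --output json" against "Degraded". To read that field without wading through the escaped blob above, a jq one-liner works (assuming jq is available on the host); on this run it would print "Stopped" instead of the expected "Degraded":

	$ out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name)\t\(.Status)"'
	ha-600000	Stopped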

TestMutliControlPlane/serial/RestartSecondaryNode (209.03s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-600000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.115897417s)

-- stdout --
	* Starting "ha-600000-m02" control-plane node in "ha-600000" cluster
	* Restarting existing qemu2 VM for "ha-600000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-600000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 03:57:46.863911    3043 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:57:46.864214    3043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:57:46.864219    3043 out.go:304] Setting ErrFile to fd 2...
	I0311 03:57:46.864222    3043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:57:46.864370    3043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:57:46.864681    3043 mustload.go:65] Loading cluster: ha-600000
	I0311 03:57:46.864965    3043 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0311 03:57:46.865231    3043 host.go:58] "ha-600000-m02" host status: Stopped
	I0311 03:57:46.869762    3043 out.go:177] * Starting "ha-600000-m02" control-plane node in "ha-600000" cluster
	I0311 03:57:46.872705    3043 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 03:57:46.872726    3043 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 03:57:46.872744    3043 cache.go:56] Caching tarball of preloaded images
	I0311 03:57:46.872854    3043 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 03:57:46.872863    3043 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 03:57:46.872936    3043 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/ha-600000/config.json ...
	I0311 03:57:46.873339    3043 start.go:360] acquireMachinesLock for ha-600000-m02: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 03:57:46.873390    3043 start.go:364] duration metric: took 35.292µs to acquireMachinesLock for "ha-600000-m02"
	I0311 03:57:46.873406    3043 start.go:96] Skipping create...Using existing machine configuration
	I0311 03:57:46.873409    3043 fix.go:54] fixHost starting: m02
	I0311 03:57:46.873569    3043 fix.go:112] recreateIfNeeded on ha-600000-m02: state=Stopped err=<nil>
	W0311 03:57:46.873576    3043 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 03:57:46.877675    3043 out.go:177] * Restarting existing qemu2 VM for "ha-600000-m02" ...
	I0311 03:57:46.881772    3043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:60:f4:67:33:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/disk.qcow2
	I0311 03:57:46.884020    3043 main.go:141] libmachine: STDOUT: 
	I0311 03:57:46.884044    3043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 03:57:46.884069    3043 fix.go:56] duration metric: took 10.659458ms for fixHost
	I0311 03:57:46.884074    3043 start.go:83] releasing machines lock for "ha-600000-m02", held for 10.678708ms
	W0311 03:57:46.884079    3043 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 03:57:46.884107    3043 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 03:57:46.884110    3043 start.go:728] Will try again in 5 seconds ...
	I0311 03:57:51.886032    3043 start.go:360] acquireMachinesLock for ha-600000-m02: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 03:57:51.886228    3043 start.go:364] duration metric: took 159.291µs to acquireMachinesLock for "ha-600000-m02"
	I0311 03:57:51.886323    3043 start.go:96] Skipping create...Using existing machine configuration
	I0311 03:57:51.886333    3043 fix.go:54] fixHost starting: m02
	I0311 03:57:51.886659    3043 fix.go:112] recreateIfNeeded on ha-600000-m02: state=Stopped err=<nil>
	W0311 03:57:51.886673    3043 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 03:57:51.891289    3043 out.go:177] * Restarting existing qemu2 VM for "ha-600000-m02" ...
	I0311 03:57:51.895358    3043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:60:f4:67:33:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/disk.qcow2
	I0311 03:57:51.898352    3043 main.go:141] libmachine: STDOUT: 
	I0311 03:57:51.898437    3043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 03:57:51.898474    3043 fix.go:56] duration metric: took 12.1295ms for fixHost
	I0311 03:57:51.898482    3043 start.go:83] releasing machines lock for "ha-600000-m02", held for 12.243709ms
	W0311 03:57:51.898548    3043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 03:57:51.904375    3043 out.go:177] 
	W0311 03:57:51.908386    3043 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 03:57:51.908444    3043 out.go:239] * 
	* 
	W0311 03:57:51.910555    3043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 03:57:51.913233    3043 out.go:177] 

** /stderr **
ha_test.go:422: I0311 03:57:46.863911    3043 out.go:291] Setting OutFile to fd 1 ...
I0311 03:57:46.864214    3043 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:57:46.864219    3043 out.go:304] Setting ErrFile to fd 2...
I0311 03:57:46.864222    3043 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:57:46.864370    3043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
I0311 03:57:46.864681    3043 mustload.go:65] Loading cluster: ha-600000
I0311 03:57:46.864965    3043 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
W0311 03:57:46.865231    3043 host.go:58] "ha-600000-m02" host status: Stopped
I0311 03:57:46.869762    3043 out.go:177] * Starting "ha-600000-m02" control-plane node in "ha-600000" cluster
I0311 03:57:46.872705    3043 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0311 03:57:46.872726    3043 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0311 03:57:46.872744    3043 cache.go:56] Caching tarball of preloaded images
I0311 03:57:46.872854    3043 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0311 03:57:46.872863    3043 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0311 03:57:46.872936    3043 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/ha-600000/config.json ...
I0311 03:57:46.873339    3043 start.go:360] acquireMachinesLock for ha-600000-m02: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0311 03:57:46.873390    3043 start.go:364] duration metric: took 35.292µs to acquireMachinesLock for "ha-600000-m02"
I0311 03:57:46.873406    3043 start.go:96] Skipping create...Using existing machine configuration
I0311 03:57:46.873409    3043 fix.go:54] fixHost starting: m02
I0311 03:57:46.873569    3043 fix.go:112] recreateIfNeeded on ha-600000-m02: state=Stopped err=<nil>
W0311 03:57:46.873576    3043 fix.go:138] unexpected machine state, will restart: <nil>
I0311 03:57:46.877675    3043 out.go:177] * Restarting existing qemu2 VM for "ha-600000-m02" ...
I0311 03:57:46.881772    3043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:60:f4:67:33:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/disk.qcow2
I0311 03:57:46.884020    3043 main.go:141] libmachine: STDOUT: 
I0311 03:57:46.884044    3043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0311 03:57:46.884069    3043 fix.go:56] duration metric: took 10.659458ms for fixHost
I0311 03:57:46.884074    3043 start.go:83] releasing machines lock for "ha-600000-m02", held for 10.678708ms
W0311 03:57:46.884079    3043 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0311 03:57:46.884107    3043 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0311 03:57:46.884110    3043 start.go:728] Will try again in 5 seconds ...
I0311 03:57:51.886032    3043 start.go:360] acquireMachinesLock for ha-600000-m02: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0311 03:57:51.886228    3043 start.go:364] duration metric: took 159.291µs to acquireMachinesLock for "ha-600000-m02"
I0311 03:57:51.886323    3043 start.go:96] Skipping create...Using existing machine configuration
I0311 03:57:51.886333    3043 fix.go:54] fixHost starting: m02
I0311 03:57:51.886659    3043 fix.go:112] recreateIfNeeded on ha-600000-m02: state=Stopped err=<nil>
W0311 03:57:51.886673    3043 fix.go:138] unexpected machine state, will restart: <nil>
I0311 03:57:51.891289    3043 out.go:177] * Restarting existing qemu2 VM for "ha-600000-m02" ...
I0311 03:57:51.895358    3043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:60:f4:67:33:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m02/disk.qcow2
I0311 03:57:51.898352    3043 main.go:141] libmachine: STDOUT: 
I0311 03:57:51.898437    3043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0311 03:57:51.898474    3043 fix.go:56] duration metric: took 12.1295ms for fixHost
I0311 03:57:51.898482    3043 start.go:83] releasing machines lock for "ha-600000-m02", held for 12.243709ms
W0311 03:57:51.898548    3043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0311 03:57:51.904375    3043 out.go:177] 
W0311 03:57:51.908386    3043 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0311 03:57:51.908444    3043 out.go:239] * 
* 
W0311 03:57:51.910555    3043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0311 03:57:51.913233    3043 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-600000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr
E0311 03:59:15.853732    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 04:00:35.766853    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 04:00:38.921907    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr: exit status 7 (2m57.914388125s)

                                                
                                                
-- stdout --
	ha-600000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-600000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-600000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 03:57:51.960493    3049 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:57:51.960883    3049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:57:51.960891    3049 out.go:304] Setting ErrFile to fd 2...
	I0311 03:57:51.960897    3049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:57:51.961154    3049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:57:51.961327    3049 out.go:298] Setting JSON to false
	I0311 03:57:51.961343    3049 mustload.go:65] Loading cluster: ha-600000
	I0311 03:57:51.961387    3049 notify.go:220] Checking for updates...
	I0311 03:57:51.961597    3049 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:57:51.961604    3049 status.go:255] checking status of ha-600000 ...
	I0311 03:57:51.962543    3049 status.go:330] ha-600000 host status = "Running" (err=<nil>)
	I0311 03:57:51.962552    3049 host.go:66] Checking if "ha-600000" exists ...
	I0311 03:57:51.962673    3049 host.go:66] Checking if "ha-600000" exists ...
	I0311 03:57:51.962792    3049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 03:57:51.962804    3049 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/id_rsa Username:docker}
	W0311 03:57:51.962989    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 03:57:51.963005    3049 retry.go:31] will retry after 175.209324ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 03:57:52.139165    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 03:57:52.139187    3049 retry.go:31] will retry after 315.890807ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 03:57:52.455263    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 03:57:52.455282    3049 retry.go:31] will retry after 647.322132ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 03:57:53.104750    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 03:57:53.104800    3049 retry.go:31] will retry after 142.47512ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0311 03:57:53.249350    3049 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/id_rsa Username:docker}
	W0311 03:57:53.249687    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 03:57:53.249697    3049 retry.go:31] will retry after 188.742102ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 03:57:53.440599    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 03:57:53.440620    3049 retry.go:31] will retry after 454.60331ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 03:58:19.814430    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0311 03:58:19.814494    3049 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 03:58:19.814507    3049 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 03:58:19.814512    3049 status.go:257] ha-600000 status: &{Name:ha-600000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 03:58:19.814524    3049 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 03:58:19.814529    3049 status.go:255] checking status of ha-600000-m02 ...
	I0311 03:58:19.814735    3049 status.go:330] ha-600000-m02 host status = "Stopped" (err=<nil>)
	I0311 03:58:19.814739    3049 status.go:343] host is not running, skipping remaining checks
	I0311 03:58:19.814741    3049 status.go:257] ha-600000-m02 status: &{Name:ha-600000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 03:58:19.814746    3049 status.go:255] checking status of ha-600000-m03 ...
	I0311 03:58:19.815472    3049 status.go:330] ha-600000-m03 host status = "Running" (err=<nil>)
	I0311 03:58:19.815476    3049 host.go:66] Checking if "ha-600000-m03" exists ...
	I0311 03:58:19.815566    3049 host.go:66] Checking if "ha-600000-m03" exists ...
	I0311 03:58:19.815680    3049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 03:58:19.815686    3049 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m03/id_rsa Username:docker}
	W0311 03:59:34.815993    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0311 03:59:34.816163    3049 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0311 03:59:34.816194    3049 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 03:59:34.816208    3049 status.go:257] ha-600000-m03 status: &{Name:ha-600000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 03:59:34.816237    3049 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 03:59:34.816249    3049 status.go:255] checking status of ha-600000-m04 ...
	I0311 03:59:34.818525    3049 status.go:330] ha-600000-m04 host status = "Running" (err=<nil>)
	I0311 03:59:34.818544    3049 host.go:66] Checking if "ha-600000-m04" exists ...
	I0311 03:59:34.819000    3049 host.go:66] Checking if "ha-600000-m04" exists ...
	I0311 03:59:34.819620    3049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 03:59:34.819648    3049 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000-m04/id_rsa Username:docker}
	W0311 04:00:49.819433    3049 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0311 04:00:49.819640    3049 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0311 04:00:49.819679    3049 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0311 04:00:49.819704    3049 status.go:257] ha-600000-m04 status: &{Name:ha-600000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0311 04:00:49.819750    3049 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 3 (25.998308041s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 04:01:15.819730    3117 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 04:01:15.819762    3117 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (209.03s)
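
Every failure in this group traces back to the same driver-level error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. The status probes that follow then time out dialing SSH on 192.168.105.5/.7/.8, consistent with the guests never coming up on the vmnet. A quick way to check the daemon on the CI host (a diagnostic sketch; the launchctl query is an assumption and depends on how socket_vmnet was installed):

    $ ls -l /var/run/socket_vmnet           # does the unix socket exist?
    $ pgrep -fl socket_vmnet                # is the daemon process running?
    $ sudo launchctl list | grep -i vmnet   # if it is managed by launchd

If the socket is missing or nothing is listening on it, restarting the socket_vmnet service should clear the "Connection refused" errors.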

                                                
                                    
TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.4s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-600000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-600000 -v=7 --alsologtostderr
E0311 04:04:15.841994    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 04:05:35.755696    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-600000 -v=7 --alsologtostderr: (3m49.012596334s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-600000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-600000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226318917s)

                                                
                                                
-- stdout --
	* [ha-600000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-600000" primary control-plane node in "ha-600000" cluster
	* Restarting existing qemu2 VM for "ha-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:06:21.839311    3221 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:06:21.839493    3221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:06:21.839498    3221 out.go:304] Setting ErrFile to fd 2...
	I0311 04:06:21.839501    3221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:06:21.839691    3221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:06:21.842013    3221 out.go:298] Setting JSON to false
	I0311 04:06:21.863909    3221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2153,"bootTime":1710153028,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:06:21.863983    3221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:06:21.867856    3221 out.go:177] * [ha-600000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:06:21.875950    3221 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:06:21.879870    3221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:06:21.875978    3221 notify.go:220] Checking for updates...
	I0311 04:06:21.882959    3221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:06:21.885883    3221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:06:21.888882    3221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:06:21.891887    3221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:06:21.893695    3221 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:06:21.893743    3221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:06:21.897815    3221 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:06:21.904716    3221 start.go:297] selected driver: qemu2
	I0311 04:06:21.904722    3221 start.go:901] validating driver "qemu2" against &{Name:ha-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-600000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:06:21.904794    3221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:06:21.907706    3221 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:06:21.907744    3221 cni.go:84] Creating CNI manager for ""
	I0311 04:06:21.907749    3221 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 04:06:21.907799    3221 start.go:340] cluster config:
	{Name:ha-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-600000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:06:21.912696    3221 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:06:21.920859    3221 out.go:177] * Starting "ha-600000" primary control-plane node in "ha-600000" cluster
	I0311 04:06:21.924859    3221 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:06:21.924872    3221 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:06:21.924880    3221 cache.go:56] Caching tarball of preloaded images
	I0311 04:06:21.924929    3221 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:06:21.924935    3221 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:06:21.925011    3221 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/ha-600000/config.json ...
	I0311 04:06:21.925454    3221 start.go:360] acquireMachinesLock for ha-600000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:06:21.925486    3221 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "ha-600000"
	I0311 04:06:21.925501    3221 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:06:21.925508    3221 fix.go:54] fixHost starting: 
	I0311 04:06:21.925623    3221 fix.go:112] recreateIfNeeded on ha-600000: state=Stopped err=<nil>
	W0311 04:06:21.925631    3221 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:06:21.929937    3221 out.go:177] * Restarting existing qemu2 VM for "ha-600000" ...
	I0311 04:06:21.937943    3221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:fd:0e:6e:57:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/disk.qcow2
	I0311 04:06:21.939980    3221 main.go:141] libmachine: STDOUT: 
	I0311 04:06:21.939996    3221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:06:21.940022    3221 fix.go:56] duration metric: took 14.51725ms for fixHost
	I0311 04:06:21.940026    3221 start.go:83] releasing machines lock for "ha-600000", held for 14.53675ms
	W0311 04:06:21.940033    3221 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:06:21.940072    3221 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:06:21.940077    3221 start.go:728] Will try again in 5 seconds ...
	I0311 04:06:26.942106    3221 start.go:360] acquireMachinesLock for ha-600000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:06:26.942504    3221 start.go:364] duration metric: took 243.542µs to acquireMachinesLock for "ha-600000"
	I0311 04:06:26.942655    3221 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:06:26.942682    3221 fix.go:54] fixHost starting: 
	I0311 04:06:26.943402    3221 fix.go:112] recreateIfNeeded on ha-600000: state=Stopped err=<nil>
	W0311 04:06:26.943430    3221 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:06:26.948821    3221 out.go:177] * Restarting existing qemu2 VM for "ha-600000" ...
	I0311 04:06:26.952962    3221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:fd:0e:6e:57:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/disk.qcow2
	I0311 04:06:26.962687    3221 main.go:141] libmachine: STDOUT: 
	I0311 04:06:26.962745    3221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:06:26.962862    3221 fix.go:56] duration metric: took 20.184459ms for fixHost
	I0311 04:06:26.962891    3221 start.go:83] releasing machines lock for "ha-600000", held for 20.33175ms
	W0311 04:06:26.963094    3221 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:06:26.970694    3221 out.go:177] 
	W0311 04:06:26.974858    3221 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:06:26.974903    3221 out.go:239] * 
	W0311 04:06:26.977371    3221 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:06:26.988768    3221 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 start -p ha-600000 --wait=true -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-600000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 7 (34.01975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.40s)
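
The restart path retries once after 5 seconds (start.go:728) and then exits with GUEST_PROVISION. Since qemu is launched through socket_vmnet_client, the connection failure can be reproduced outside the harness with the same client binary shown in the log (a sketch; `/usr/bin/true` is only a placeholder for the command the client would exec):

    $ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

With the daemon down this fails with the same `Failed to connect to "/var/run/socket_vmnet": Connection refused` seen above.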

                                                
                                    
TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-600000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.480167ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-600000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-600000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:06:27.133550    3237 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:06:27.133774    3237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:06:27.133777    3237 out.go:304] Setting ErrFile to fd 2...
	I0311 04:06:27.133779    3237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:06:27.133908    3237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:06:27.134149    3237 mustload.go:65] Loading cluster: ha-600000
	I0311 04:06:27.134370    3237 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0311 04:06:27.134695    3237 out.go:239] ! The control-plane node ha-600000 host is not running (will try others): state=Stopped
	W0311 04:06:27.134807    3237 out.go:239] ! The control-plane node ha-600000-m02 host is not running (will try others): state=Stopped
	I0311 04:06:27.138496    3237 out.go:177] * The control-plane node ha-600000-m03 host is not running: state=Stopped
	I0311 04:06:27.141541    3237 out.go:177]   To start a cluster, run: "minikube start -p ha-600000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-600000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr: exit status 7 (32.492125ms)

                                                
                                                
-- stdout --
	ha-600000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:06:27.175983    3239 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:06:27.176126    3239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:06:27.176129    3239 out.go:304] Setting ErrFile to fd 2...
	I0311 04:06:27.176132    3239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:06:27.176267    3239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:06:27.176404    3239 out.go:298] Setting JSON to false
	I0311 04:06:27.176414    3239 mustload.go:65] Loading cluster: ha-600000
	I0311 04:06:27.176481    3239 notify.go:220] Checking for updates...
	I0311 04:06:27.176647    3239 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:06:27.176652    3239 status.go:255] checking status of ha-600000 ...
	I0311 04:06:27.176837    3239 status.go:330] ha-600000 host status = "Stopped" (err=<nil>)
	I0311 04:06:27.176841    3239 status.go:343] host is not running, skipping remaining checks
	I0311 04:06:27.176843    3239 status.go:257] ha-600000 status: &{Name:ha-600000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 04:06:27.176853    3239 status.go:255] checking status of ha-600000-m02 ...
	I0311 04:06:27.176945    3239 status.go:330] ha-600000-m02 host status = "Stopped" (err=<nil>)
	I0311 04:06:27.176948    3239 status.go:343] host is not running, skipping remaining checks
	I0311 04:06:27.176950    3239 status.go:257] ha-600000-m02 status: &{Name:ha-600000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 04:06:27.176954    3239 status.go:255] checking status of ha-600000-m03 ...
	I0311 04:06:27.177038    3239 status.go:330] ha-600000-m03 host status = "Stopped" (err=<nil>)
	I0311 04:06:27.177040    3239 status.go:343] host is not running, skipping remaining checks
	I0311 04:06:27.177042    3239 status.go:257] ha-600000-m03 status: &{Name:ha-600000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 04:06:27.177045    3239 status.go:255] checking status of ha-600000-m04 ...
	I0311 04:06:27.177142    3239 status.go:330] ha-600000-m04 host status = "Stopped" (err=<nil>)
	I0311 04:06:27.177145    3239 status.go:343] host is not running, skipping remaining checks
	I0311 04:06:27.177147    3239 status.go:257] ha-600000-m04 status: &{Name:ha-600000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 7 (31.881209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)
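
`node delete` exits with status 83 without attempting a delete: it walks the control-plane nodes looking for a running host (ha-600000, then -m02, then -m03), finds none, and prints the start hint instead. Every subtest from here on inherits the fully stopped profile left behind by the failed RestartClusterKeepsNodes start. The recovery sequence, using the commands already present in the output above, would be:

    $ out/minikube-darwin-arm64 start -p ha-600000 --wait=true -v=7 --alsologtostderr
    $ out/minikube-darwin-arm64 -p ha-600000 node delete m03 -v=7 --alsologtostderr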

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.15s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.102938875s)
ha_test.go:413: expected profile "ha-600000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-600000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-600000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-600000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 7 (46.151583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.15s)
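
The harness expects the profile to report "Degraded" once a node is gone, but with every host stopped the aggregated status is "Stopped". The field the assertion reads can be pulled straight out of the JSON (a sketch, assuming jq is available on the agent):

    $ out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | select(.Name == "ha-600000") | .Status'
    Stopped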

                                                
                                    
TestMutliControlPlane/serial/StopCluster (202.08s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 stop -v=7 --alsologtostderr
E0311 04:06:58.823733    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 04:09:15.861590    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-600000 stop -v=7 --alsologtostderr: (3m21.975762916s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr: exit status 7 (67.322167ms)

                                                
                                                
-- stdout --
	ha-600000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-600000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:09:51.423752    3318 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:09:51.423931    3318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:09:51.423935    3318 out.go:304] Setting ErrFile to fd 2...
	I0311 04:09:51.423939    3318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:09:51.424095    3318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:09:51.424259    3318 out.go:298] Setting JSON to false
	I0311 04:09:51.424279    3318 mustload.go:65] Loading cluster: ha-600000
	I0311 04:09:51.424342    3318 notify.go:220] Checking for updates...
	I0311 04:09:51.424592    3318 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:09:51.424599    3318 status.go:255] checking status of ha-600000 ...
	I0311 04:09:51.424837    3318 status.go:330] ha-600000 host status = "Stopped" (err=<nil>)
	I0311 04:09:51.424842    3318 status.go:343] host is not running, skipping remaining checks
	I0311 04:09:51.424845    3318 status.go:257] ha-600000 status: &{Name:ha-600000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 04:09:51.424857    3318 status.go:255] checking status of ha-600000-m02 ...
	I0311 04:09:51.424972    3318 status.go:330] ha-600000-m02 host status = "Stopped" (err=<nil>)
	I0311 04:09:51.424976    3318 status.go:343] host is not running, skipping remaining checks
	I0311 04:09:51.424978    3318 status.go:257] ha-600000-m02 status: &{Name:ha-600000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 04:09:51.424984    3318 status.go:255] checking status of ha-600000-m03 ...
	I0311 04:09:51.425101    3318 status.go:330] ha-600000-m03 host status = "Stopped" (err=<nil>)
	I0311 04:09:51.425104    3318 status.go:343] host is not running, skipping remaining checks
	I0311 04:09:51.425107    3318 status.go:257] ha-600000-m03 status: &{Name:ha-600000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 04:09:51.425111    3318 status.go:255] checking status of ha-600000-m04 ...
	I0311 04:09:51.425223    3318 status.go:330] ha-600000-m04 host status = "Stopped" (err=<nil>)
	I0311 04:09:51.425227    3318 status.go:343] host is not running, skipping remaining checks
	I0311 04:09:51.425230    3318 status.go:257] ha-600000-m04 status: &{Name:ha-600000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr": ha-600000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr": ha-600000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr": ha-600000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-600000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 7 (33.953667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/StopCluster (202.08s)
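
The stop itself completed (3m21s, all four hosts report Stopped); what fails is the follow-up status check, because `minikube status` exits non-zero (7 here) when hosts are not running and the helper treats any non-zero exit as an error. The two can be separated by checking the exit code explicitly (sketch):

    $ out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000; echo "exit=$?"
    Stopped
    exit=7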

                                                
                                    
TestMutliControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-600000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-600000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182620125s)

                                                
                                                
-- stdout --
	* [ha-600000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-600000" primary control-plane node in "ha-600000" cluster
	* Restarting existing qemu2 VM for "ha-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-600000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:09:51.490128    3322 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:09:51.490260    3322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:09:51.490263    3322 out.go:304] Setting ErrFile to fd 2...
	I0311 04:09:51.490265    3322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:09:51.490378    3322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:09:51.491374    3322 out.go:298] Setting JSON to false
	I0311 04:09:51.507456    3322 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2363,"bootTime":1710153028,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:09:51.507528    3322 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:09:51.511324    3322 out.go:177] * [ha-600000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:09:51.518356    3322 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:09:51.522261    3322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:09:51.518412    3322 notify.go:220] Checking for updates...
	I0311 04:09:51.530294    3322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:09:51.533281    3322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:09:51.536321    3322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:09:51.539286    3322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:09:51.542662    3322 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:09:51.542931    3322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:09:51.547237    3322 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:09:51.554254    3322 start.go:297] selected driver: qemu2
	I0311 04:09:51.554260    3322 start.go:901] validating driver "qemu2" against &{Name:ha-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-600000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:09:51.554332    3322 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:09:51.556630    3322 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:09:51.556676    3322 cni.go:84] Creating CNI manager for ""
	I0311 04:09:51.556680    3322 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 04:09:51.556721    3322 start.go:340] cluster config:
	{Name:ha-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-600000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:09:51.561145    3322 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:09:51.568186    3322 out.go:177] * Starting "ha-600000" primary control-plane node in "ha-600000" cluster
	I0311 04:09:51.572229    3322 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:09:51.572241    3322 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:09:51.572249    3322 cache.go:56] Caching tarball of preloaded images
	I0311 04:09:51.572298    3322 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:09:51.572304    3322 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:09:51.572384    3322 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/ha-600000/config.json ...
	I0311 04:09:51.572864    3322 start.go:360] acquireMachinesLock for ha-600000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:09:51.572903    3322 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "ha-600000"
	I0311 04:09:51.572912    3322 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:09:51.572918    3322 fix.go:54] fixHost starting: 
	I0311 04:09:51.573043    3322 fix.go:112] recreateIfNeeded on ha-600000: state=Stopped err=<nil>
	W0311 04:09:51.573052    3322 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:09:51.576270    3322 out.go:177] * Restarting existing qemu2 VM for "ha-600000" ...
	I0311 04:09:51.584327    3322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:fd:0e:6e:57:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/disk.qcow2
	I0311 04:09:51.586404    3322 main.go:141] libmachine: STDOUT: 
	I0311 04:09:51.586426    3322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:09:51.586477    3322 fix.go:56] duration metric: took 13.560083ms for fixHost
	I0311 04:09:51.586481    3322 start.go:83] releasing machines lock for "ha-600000", held for 13.574292ms
	W0311 04:09:51.586488    3322 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:09:51.586525    3322 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:09:51.586529    3322 start.go:728] Will try again in 5 seconds ...
	I0311 04:09:56.587743    3322 start.go:360] acquireMachinesLock for ha-600000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:09:56.588026    3322 start.go:364] duration metric: took 205.875µs to acquireMachinesLock for "ha-600000"
	I0311 04:09:56.588147    3322 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:09:56.588169    3322 fix.go:54] fixHost starting: 
	I0311 04:09:56.588820    3322 fix.go:112] recreateIfNeeded on ha-600000: state=Stopped err=<nil>
	W0311 04:09:56.588850    3322 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:09:56.594162    3322 out.go:177] * Restarting existing qemu2 VM for "ha-600000" ...
	I0311 04:09:56.600267    3322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:fd:0e:6e:57:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/ha-600000/disk.qcow2
	I0311 04:09:56.609923    3322 main.go:141] libmachine: STDOUT: 
	I0311 04:09:56.609990    3322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:09:56.610051    3322 fix.go:56] duration metric: took 21.888667ms for fixHost
	I0311 04:09:56.610067    3322 start.go:83] releasing machines lock for "ha-600000", held for 22.015167ms
	W0311 04:09:56.610252    3322 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-600000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:09:56.616406    3322 out.go:177] 
	W0311 04:09:56.620252    3322 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:09:56.620306    3322 out.go:239] * 
	* 
	W0311 04:09:56.623324    3322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:09:56.634131    3322 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-600000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 7 (71.013291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartCluster (5.26s)
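
Note: every qemu2 start in this run fails the same way: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet. On a unix-domain socket, "Connection refused" means no process is accepting connections there, i.e. the socket_vmnet daemon is not running on the agent. A minimal Go sketch of the same probe (a hypothetical helper, not part of the test suite; the socket path is the one from the logs above):

	// probe_socket_vmnet.go: dial the socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With no daemon listening this prints something like:
			// dial unix /var/run/socket_vmnet: connect: connection refused
			fmt.Println("ERROR:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails as above, restarting the socket_vmnet service on the host would be the first thing to try before re-running the suite.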

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-600000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-600000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-600000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-600000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 7 (31.537958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                    
TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-600000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-600000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.923625ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-600000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-600000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:09:56.851868    3338 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:09:56.852226    3338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:09:56.852229    3338 out.go:304] Setting ErrFile to fd 2...
	I0311 04:09:56.852231    3338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:09:56.852394    3338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:09:56.852618    3338 mustload.go:65] Loading cluster: ha-600000
	I0311 04:09:56.852836    3338 config.go:182] Loaded profile config "ha-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0311 04:09:56.853147    3338 out.go:239] ! The control-plane node ha-600000 host is not running (will try others): state=Stopped
	! The control-plane node ha-600000 host is not running (will try others): state=Stopped
	W0311 04:09:56.853251    3338 out.go:239] ! The control-plane node ha-600000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-600000-m02 host is not running (will try others): state=Stopped
	I0311 04:09:56.856944    3338 out.go:177] * The control-plane node ha-600000-m03 host is not running: state=Stopped
	I0311 04:09:56.860911    3338 out.go:177]   To start a cluster, run: "minikube start -p ha-600000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-600000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-600000 -n ha-600000: exit status 7 (31.883084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-600000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.89s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-342000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-342000 --driver=qemu2 : exit status 80 (9.822337458s)

                                                
                                                
-- stdout --
	* [image-342000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-342000" primary control-plane node in "image-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-342000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-342000 -n image-342000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-342000 -n image-342000: exit status 7 (69.320375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-342000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.89s)

                                                
                                    
TestJSONOutput/start/Command (9.8s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-607000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-607000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.803186084s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"58c3d878-8dcf-4402-8a75-7e1d3853defb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-607000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e161bfd6-0108-437d-8b26-f0f71921a2fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18350"}}
	{"specversion":"1.0","id":"230da44c-a9fa-45c9-8975-cfeff646856e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig"}}
	{"specversion":"1.0","id":"8dbead31-77dd-418a-ac7c-9314254f3413","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6d8e9437-6103-4e22-89e3-93352b5f0c91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fedb2cee-4b6e-4c9c-8e5c-6ade8988bcdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube"}}
	{"specversion":"1.0","id":"916e182d-76dd-4cf3-ab4b-b4a2dd8e1e31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91435d22-96bd-449e-ba10-aa31dbbe2ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e6cbc01-fc7a-4a5e-b999-a57197fe4c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"31eafb35-410c-446f-8d54-495e24fb41a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-607000\" primary control-plane node in \"json-output-607000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d0992c6-3fc4-4b74-be02-68d1f89c42da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"1a64161a-58a7-4733-9126-1260f2ded16b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-607000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"06608651-997c-4751-b0c3-6121a57c6a12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"2151878a-f36a-4813-9d3f-19411dfa0e72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"25a679df-0611-4a54-a619-f14d30a0c8e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-607000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"c8ea1a22-29bc-4f2f-9aa4-441a06c60ac4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"db385cc1-7f5a-4984-b9fa-39038aa8a4c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-607000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.80s)
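
Note: the secondary failure at json_output_test.go:213/70 is a parsing problem layered on the start failure: socket_vmnet_client's "OUTPUT:" and "ERROR: ..." lines are interleaved with the cloud events on stdout, so a line that does not begin with '{' reaches the JSON decoder. A minimal sketch of why the decoder reports exactly "invalid character 'O' looking for beginning of value" (assuming, as the error text suggests, that each stdout line is unmarshalled independently):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`, // parses fine
			"OUTPUT: ", // leaked by socket_vmnet_client, not JSON
		}
		for _, line := range lines {
			var event map[string]interface{}
			if err := json.Unmarshal([]byte(line), &event); err != nil {
				// prints: invalid character 'O' looking for beginning of value
				fmt.Println(err)
			}
		}
	}

The TestJSONOutput/unpause/Command failure below is the same mechanism with a different first byte: the human-readable "* The control-plane node ..." line yields "invalid character '*' looking for beginning of value".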

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-607000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-607000 --output=json --user=testUser: exit status 83 (78.207333ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ef791dd5-20f8-4902-8f48-193d90da4dcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-607000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"f2addf87-4217-4423-8feb-fc5f470ea3cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-607000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-607000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-607000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-607000 --output=json --user=testUser: exit status 83 (45.014542ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-607000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-607000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-607000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-607000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.34s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-223000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-223000 --driver=qemu2 : exit status 80 (9.891510958s)

                                                
                                                
-- stdout --
	* [first-223000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-223000" primary control-plane node in "first-223000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-223000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-223000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-223000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-11 04:10:30.933029 -0700 PDT m=+2168.457991709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-225000 -n second-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-225000 -n second-225000: exit status 85 (86.059958ms)

                                                
                                                
-- stdout --
	* Profile "second-225000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-225000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-225000" host is not running, skipping log retrieval (state="* Profile \"second-225000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-225000\"")
helpers_test.go:175: Cleaning up "second-225000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-225000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-11 04:10:31.254082 -0700 PDT m=+2168.779053917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-223000 -n first-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-223000 -n first-223000: exit status 7 (31.750917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-223000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-223000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-223000
--- FAIL: TestMinikubeProfile (10.34s)
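
Note: the post-mortem helpers read host state with "minikube status --format={{.Host}}", where the --format argument is rendered as a Go text/template over minikube's status payload; that is why the -- stdout -- blocks above contain only the bare word "Stopped". A rough illustration of the mechanism (the Status struct here is a hypothetical stand-in reduced to the one field the helper reads):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for minikube's status payload.
	type Status struct {
		Host string // e.g. "Running", "Stopped"
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Writes "Stopped", matching the post-mortem output above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
	}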

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.63s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-070000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0311 04:10:35.775024    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-070000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.562720666s)

                                                
                                                
-- stdout --
	* [mount-start-1-070000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-070000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-070000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-070000 -n mount-start-1-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-070000 -n mount-start-1-070000: exit status 7 (69.725291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-070000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.63s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-976000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-976000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.9124565s)

                                                
                                                
-- stdout --
	* [multinode-976000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-976000" primary control-plane node in "multinode-976000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-976000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:10:42.379709    3513 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:10:42.379814    3513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:10:42.379818    3513 out.go:304] Setting ErrFile to fd 2...
	I0311 04:10:42.379820    3513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:10:42.379947    3513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:10:42.381005    3513 out.go:298] Setting JSON to false
	I0311 04:10:42.397168    3513 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2414,"bootTime":1710153028,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:10:42.397231    3513 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:10:42.403994    3513 out.go:177] * [multinode-976000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:10:42.411015    3513 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:10:42.414913    3513 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:10:42.411076    3513 notify.go:220] Checking for updates...
	I0311 04:10:42.417968    3513 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:10:42.420981    3513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:10:42.423896    3513 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:10:42.426908    3513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:10:42.430185    3513 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:10:42.433936    3513 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:10:42.440968    3513 start.go:297] selected driver: qemu2
	I0311 04:10:42.440974    3513 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:10:42.440980    3513 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:10:42.443267    3513 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:10:42.446962    3513 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:10:42.450046    3513 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:10:42.450087    3513 cni.go:84] Creating CNI manager for ""
	I0311 04:10:42.450094    3513 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0311 04:10:42.450098    3513 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 04:10:42.450139    3513 start.go:340] cluster config:
	{Name:multinode-976000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:10:42.454713    3513 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:10:42.461885    3513 out.go:177] * Starting "multinode-976000" primary control-plane node in "multinode-976000" cluster
	I0311 04:10:42.465958    3513 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:10:42.465982    3513 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:10:42.465992    3513 cache.go:56] Caching tarball of preloaded images
	I0311 04:10:42.466046    3513 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:10:42.466052    3513 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:10:42.466266    3513 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/multinode-976000/config.json ...
	I0311 04:10:42.466279    3513 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/multinode-976000/config.json: {Name:mk81a7edc974cdf1c9a4048e907b4269f9fc513c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:10:42.466491    3513 start.go:360] acquireMachinesLock for multinode-976000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:10:42.466524    3513 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "multinode-976000"
	I0311 04:10:42.466534    3513 start.go:93] Provisioning new machine with config: &{Name:multinode-976000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:10:42.466563    3513 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:10:42.474979    3513 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:10:42.492143    3513 start.go:159] libmachine.API.Create for "multinode-976000" (driver="qemu2")
	I0311 04:10:42.492173    3513 client.go:168] LocalClient.Create starting
	I0311 04:10:42.492236    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:10:42.492274    3513 main.go:141] libmachine: Decoding PEM data...
	I0311 04:10:42.492285    3513 main.go:141] libmachine: Parsing certificate...
	I0311 04:10:42.492326    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:10:42.492348    3513 main.go:141] libmachine: Decoding PEM data...
	I0311 04:10:42.492356    3513 main.go:141] libmachine: Parsing certificate...
	I0311 04:10:42.492696    3513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:10:42.634909    3513 main.go:141] libmachine: Creating SSH key...
	I0311 04:10:42.706713    3513 main.go:141] libmachine: Creating Disk image...
	I0311 04:10:42.706719    3513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:10:42.707014    3513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:10:42.719283    3513 main.go:141] libmachine: STDOUT: 
	I0311 04:10:42.719314    3513 main.go:141] libmachine: STDERR: 
	I0311 04:10:42.719362    3513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2 +20000M
	I0311 04:10:42.730224    3513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:10:42.730243    3513 main.go:141] libmachine: STDERR: 
	I0311 04:10:42.730254    3513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:10:42.730259    3513 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:10:42.730290    3513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:2a:28:ef:d4:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:10:42.732131    3513 main.go:141] libmachine: STDOUT: 
	I0311 04:10:42.732156    3513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:10:42.732173    3513 client.go:171] duration metric: took 240.003583ms to LocalClient.Create
	I0311 04:10:44.732404    3513 start.go:128] duration metric: took 2.265889833s to createHost
	I0311 04:10:44.732476    3513 start.go:83] releasing machines lock for "multinode-976000", held for 2.266009583s
	W0311 04:10:44.732542    3513 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:10:44.738618    3513 out.go:177] * Deleting "multinode-976000" in qemu2 ...
	W0311 04:10:44.769867    3513 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:10:44.769893    3513 start.go:728] Will try again in 5 seconds ...
	I0311 04:10:49.771322    3513 start.go:360] acquireMachinesLock for multinode-976000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:10:49.771624    3513 start.go:364] duration metric: took 220.959µs to acquireMachinesLock for "multinode-976000"
	I0311 04:10:49.771708    3513 start.go:93] Provisioning new machine with config: &{Name:multinode-976000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:10:49.771977    3513 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:10:49.782535    3513 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:10:49.826545    3513 start.go:159] libmachine.API.Create for "multinode-976000" (driver="qemu2")
	I0311 04:10:49.826604    3513 client.go:168] LocalClient.Create starting
	I0311 04:10:49.826766    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:10:49.826834    3513 main.go:141] libmachine: Decoding PEM data...
	I0311 04:10:49.826853    3513 main.go:141] libmachine: Parsing certificate...
	I0311 04:10:49.826927    3513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:10:49.826979    3513 main.go:141] libmachine: Decoding PEM data...
	I0311 04:10:49.826997    3513 main.go:141] libmachine: Parsing certificate...
	I0311 04:10:49.827585    3513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:10:49.979975    3513 main.go:141] libmachine: Creating SSH key...
	I0311 04:10:50.187663    3513 main.go:141] libmachine: Creating Disk image...
	I0311 04:10:50.187670    3513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:10:50.187847    3513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:10:50.200788    3513 main.go:141] libmachine: STDOUT: 
	I0311 04:10:50.200810    3513 main.go:141] libmachine: STDERR: 
	I0311 04:10:50.200862    3513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2 +20000M
	I0311 04:10:50.211634    3513 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:10:50.211650    3513 main.go:141] libmachine: STDERR: 
	I0311 04:10:50.211659    3513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:10:50.211662    3513 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:10:50.211698    3513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:1b:71:bc:ac:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:10:50.213497    3513 main.go:141] libmachine: STDOUT: 
	I0311 04:10:50.213513    3513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:10:50.213527    3513 client.go:171] duration metric: took 386.930166ms to LocalClient.Create
	I0311 04:10:52.215686    3513 start.go:128] duration metric: took 2.4437415s to createHost
	I0311 04:10:52.215741    3513 start.go:83] releasing machines lock for "multinode-976000", held for 2.444171s
	W0311 04:10:52.216059    3513 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-976000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-976000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:10:52.229190    3513 out.go:177] 
	W0311 04:10:52.232330    3513 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:10:52.232380    3513 out.go:239] * 
	* 
	W0311 04:10:52.234841    3513 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:10:52.243210    3513 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-976000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (68.025666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)
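Note: every subtest failure in this group traces back to the single error captured above: the qemu2 driver provides VM networking through the socket_vmnet daemon, and its client could not reach the daemon's unix socket ("Failed to connect to /var/run/socket_vmnet: Connection refused"), so no cluster was ever created. As a minimal sketch of the same reachability check, assuming only the SocketVMnetPath value shown in the cluster config above (this probe is illustrative, not part of the test suite):

    // socketprobe.go - illustrative only: dial the unix socket that
    // socket_vmnet_client must reach before qemu-system-aarch64 can start
    // with vmnet networking.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // This is the condition the log reports: "connection refused"
            // means nothing is listening at that path.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }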

TestMultiNode/serial/DeployApp2Nodes (96.46s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (123.98175ms)

** stderr ** 
	error: cluster "multinode-976000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- rollout status deployment/busybox: exit status 1 (57.112541ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.831792ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.239584ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.361708ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.931958ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.510209ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.417333ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.198625ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.16425ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.713083ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.80125ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.915167ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.525458ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:530: failed to get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.489125ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.519042ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.898542ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (32.012ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (96.46s)
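Note: the 96 seconds spent here is the poll loop at multinode_test.go:505 re-running the Pod-IP query against a cluster that was never provisioned, so every attempt exits with the same "no server found" error. A sketch of that retry shape, using the exact command line from the log but with an assumed interval and budget (the test's real backoff values are not shown in this output):

    // retrysketch.go - illustrative only: re-run the Pod-IP query until it
    // succeeds or a deadline passes. The 10s interval and 90s budget are
    // assumptions for the sketch, not values taken from the test.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(90 * time.Second)
        for time.Now().Before(deadline) {
            out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "multinode-976000",
                "--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
            if err == nil {
                fmt.Printf("pod IPs: %s\n", out)
                return
            }
            time.Sleep(10 * time.Second)
        }
        fmt.Println("gave up: no server found for the cluster on every attempt")
    }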

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-976000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.054917ms)

** stderr ** 
	error: no server found for cluster "multinode-976000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (31.896375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-976000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-976000 -v 3 --alsologtostderr: exit status 83 (41.541334ms)

-- stdout --
	* The control-plane node multinode-976000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-976000"

-- /stdout --
** stderr ** 
	I0311 04:12:28.907623    3611 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:28.907770    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:28.907773    3611 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:28.907775    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:28.907902    3611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:28.908137    3611 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:28.908326    3611 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:28.912353    3611 out.go:177] * The control-plane node multinode-976000 host is not running: state=Stopped
	I0311 04:12:28.915231    3611 out.go:177]   To start a cluster, run: "minikube start -p multinode-976000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-976000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (31.756917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-976000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-976000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (30.302084ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-976000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-976000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-976000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (32.533625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-976000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-976000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-976000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-976000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (31.456917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
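Note: the assertion fails because the profile's saved config still carries a single entry under Nodes, where a two-node start plus the attempted node add should have produced three. A minimal decode of the same JSON shape (the struct is trimmed to just what the count needs, and the literal below is an abbreviated stand-in for the full blob logged above):

    // profilenodes.go - illustrative only: count Nodes entries in the
    // output of "minikube profile list --output json".
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []json.RawMessage
            }
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-976000","Config":{"Nodes":[{"Name":""}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // Prints 1 for this run, not the expected 3.
            fmt.Printf("%s: %d node(s) in config\n", p.Name, len(p.Config.Nodes))
        }
    }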

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status --output json --alsologtostderr: exit status 7 (31.935834ms)

-- stdout --
	{"Name":"multinode-976000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0311 04:12:29.148225    3624 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:29.148393    3624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.148397    3624 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:29.148403    3624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.148529    3624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:29.148654    3624 out.go:298] Setting JSON to true
	I0311 04:12:29.148664    3624 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:29.148714    3624 notify.go:220] Checking for updates...
	I0311 04:12:29.148879    3624 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:29.148884    3624 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:29.149086    3624 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:29.149090    3624 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:29.149092    3624 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-976000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (32.117625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
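Note: the decode error at multinode_test.go:191 is a direct consequence of the single-node state: "minikube status --output json" prints one bare JSON object when the profile has a single node, while the multinode test unmarshals into a slice. A compact reproduction, with Status as a trimmed stand-in for the test's cmd.Status type:

    // statusdecode.go - illustrative only: why one status object fails to
    // unmarshal into a slice.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        raw := []byte(`{"Name":"multinode-976000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        var many []Status
        // json: cannot unmarshal object into Go value of type []main.Status
        // (the log's error, modulo package name).
        fmt.Println(json.Unmarshal(raw, &many))

        var one Status
        fmt.Println(json.Unmarshal(raw, &one)) // <nil>: a lone object decodes fine
    }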

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 node stop m03: exit status 85 (48.418ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-976000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status: exit status 7 (31.575333ms)

-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr: exit status 7 (30.949333ms)

-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0311 04:12:29.292172    3632 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:29.292317    3632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.292320    3632 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:29.292322    3632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.292466    3632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:29.292584    3632 out.go:298] Setting JSON to false
	I0311 04:12:29.292594    3632 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:29.292646    3632 notify.go:220] Checking for updates...
	I0311 04:12:29.292790    3632 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:29.292795    3632 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:29.292990    3632 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:29.292994    3632 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:29.292996    3632 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr": multinode-976000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (31.650459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
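Note: exit status 85 (GUEST_NODE_RETRIEVE) here, and again in StartAfterStop below, is the expected follow-on of the provisioning failure: node m03 was never created, so there is nothing to stop or start. For reference, the exit statuses this run keeps producing, hand-assembled from the log rather than from minikube's source:

    // exitcodes.go - illustrative only: the exit statuses observed in this
    // run and the reason the log gives for each.
    package main

    import "fmt"

    func main() {
        observed := map[int]string{
            7:  "status: host Stopped (status command; may be ok)",
            80: "GUEST_PROVISION: qemu2 VM creation failed (socket_vmnet refused)",
            83: "control-plane host not running: state=Stopped",
            85: "GUEST_NODE_RETRIEVE: node m03 was never created",
        }
        for code, reason := range observed {
            fmt.Printf("exit status %d -> %s\n", code, reason)
        }
    }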

TestMultiNode/serial/StartAfterStop (55.61s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.212834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0311 04:12:29.355979    3636 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:29.356202    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.356205    3636 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:29.356208    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.356330    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:29.356559    3636 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:29.356740    3636 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:29.361182    3636 out.go:177] 
	W0311 04:12:29.362415    3636 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0311 04:12:29.362420    3636 out.go:239] * 
	* 
	W0311 04:12:29.364105    3636 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:12:29.367165    3636 out.go:177] 

** /stderr **
multinode_test.go:284: I0311 04:12:29.355979    3636 out.go:291] Setting OutFile to fd 1 ...
I0311 04:12:29.356202    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 04:12:29.356205    3636 out.go:304] Setting ErrFile to fd 2...
I0311 04:12:29.356208    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 04:12:29.356330    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
I0311 04:12:29.356559    3636 mustload.go:65] Loading cluster: multinode-976000
I0311 04:12:29.356740    3636 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 04:12:29.361182    3636 out.go:177] 
W0311 04:12:29.362415    3636 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0311 04:12:29.362420    3636 out.go:239] * 
W0311 04:12:29.364105    3636 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0311 04:12:29.367165    3636 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-976000 node start m03 -v=7 --alsologtostderr": exit status 85
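Note: exit status 85 accompanies GUEST_NODE_RETRIEVE here because the profile has no record of a node named m03; the earlier AddNode step never succeeded, so the cluster config still lists only the primary node. One way to confirm what the profile actually records is to read its config.json (path visible in the log above). The sketch below is not part of the test suite and assumes only the Nodes/Name keys shown in the config dump:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// profile models just enough of minikube's per-profile config.json to
	// list node names (matching the Nodes:[{Name: ...}] dump in the log).
	type profile struct {
		Nodes []struct {
			Name string
		}
	}

	func main() {
		// Path taken from the log lines above.
		data, err := os.ReadFile("/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/multinode-976000/config.json")
		if err != nil {
			fmt.Println("read config:", err)
			return
		}
		var p profile
		if err := json.Unmarshal(data, &p); err != nil {
			fmt.Println("parse config:", err)
			return
		}
		for i, n := range p.Nodes {
			name := n.Name
			if name == "" {
				name = "(primary)" // the primary node is stored with an empty name
			}
			fmt.Printf("node %d: %s\n", i, name)
		}
	}

If m03 never appears in that list, the "Could not find node m03" exit above is the expected follow-on failure rather than a new fault.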
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (32.381333ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:12:29.403791    3638 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:29.403957    3638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.403960    3638 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:29.403963    3638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:29.404081    3638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:29.404202    3638 out.go:298] Setting JSON to false
	I0311 04:12:29.404212    3638 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:29.404272    3638 notify.go:220] Checking for updates...
	I0311 04:12:29.404422    3638 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:29.404428    3638 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:29.404620    3638 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:29.404624    3638 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:29.404626    3638 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (75.094ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:12:30.974792    3640 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:30.974996    3640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:30.975000    3640 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:30.975004    3640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:30.975201    3640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:30.975372    3640 out.go:298] Setting JSON to false
	I0311 04:12:30.975385    3640 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:30.975421    3640 notify.go:220] Checking for updates...
	I0311 04:12:30.975635    3640 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:30.975642    3640 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:30.975947    3640 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:30.975952    3640 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:30.975955    3640 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (74.744ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:12:32.140795    3642 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:32.140962    3642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:32.140966    3642 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:32.140969    3642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:32.141140    3642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:32.141325    3642 out.go:298] Setting JSON to false
	I0311 04:12:32.141346    3642 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:32.141385    3642 notify.go:220] Checking for updates...
	I0311 04:12:32.141610    3642 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:32.141616    3642 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:32.141892    3642 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:32.141896    3642 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:32.141899    3642 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (76.279167ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:12:33.947808    3645 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:33.948022    3645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:33.948026    3645 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:33.948029    3645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:33.948188    3645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:33.948352    3645 out.go:298] Setting JSON to false
	I0311 04:12:33.948365    3645 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:33.948413    3645 notify.go:220] Checking for updates...
	I0311 04:12:33.948643    3645 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:33.948650    3645 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:33.948903    3645 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:33.948909    3645 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:33.948912    3645 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (74.747958ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:12:38.599217    3654 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:38.599412    3654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:38.599416    3654 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:38.599419    3654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:38.599595    3654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:38.599763    3654 out.go:298] Setting JSON to false
	I0311 04:12:38.599777    3654 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:38.599816    3654 notify.go:220] Checking for updates...
	I0311 04:12:38.600028    3654 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:38.600035    3654 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:38.600296    3654 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:38.600301    3654 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:38.600304    3654 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (76.050666ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:12:41.990798    3656 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:41.991006    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:41.991011    3656 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:41.991014    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:41.991178    3656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:41.991331    3656 out.go:298] Setting JSON to false
	I0311 04:12:41.991344    3656 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:41.991381    3656 notify.go:220] Checking for updates...
	I0311 04:12:41.991606    3656 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:41.991612    3656 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:41.991889    3656 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:41.991893    3656 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:41.991897    3656 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (75.5175ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:12:52.570224    3664 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:12:52.570428    3664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:52.570432    3664 out.go:304] Setting ErrFile to fd 2...
	I0311 04:12:52.570435    3664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:12:52.570595    3664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:12:52.570776    3664 out.go:298] Setting JSON to false
	I0311 04:12:52.570790    3664 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:12:52.570831    3664 notify.go:220] Checking for updates...
	I0311 04:12:52.571032    3664 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:12:52.571039    3664 status.go:255] checking status of multinode-976000 ...
	I0311 04:12:52.571326    3664 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:12:52.571331    3664 status.go:343] host is not running, skipping remaining checks
	I0311 04:12:52.571334    3664 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (75.542458ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:13:06.179171    3668 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:13:06.179359    3668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:06.179364    3668 out.go:304] Setting ErrFile to fd 2...
	I0311 04:13:06.179367    3668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:06.179544    3668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:13:06.179713    3668 out.go:298] Setting JSON to false
	I0311 04:13:06.179726    3668 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:13:06.179760    3668 notify.go:220] Checking for updates...
	I0311 04:13:06.179974    3668 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:13:06.179980    3668 status.go:255] checking status of multinode-976000 ...
	I0311 04:13:06.180250    3668 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:13:06.180254    3668 status.go:343] host is not running, skipping remaining checks
	I0311 04:13:06.180257    3668 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr: exit status 7 (74.541042ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:13:24.900689    3674 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:13:24.900888    3674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:24.900892    3674 out.go:304] Setting ErrFile to fd 2...
	I0311 04:13:24.900895    3674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:24.901058    3674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:13:24.901217    3674 out.go:298] Setting JSON to false
	I0311 04:13:24.901230    3674 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:13:24.901270    3674 notify.go:220] Checking for updates...
	I0311 04:13:24.901536    3674 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:13:24.901543    3674 status.go:255] checking status of multinode-976000 ...
	I0311 04:13:24.901805    3674 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:13:24.901810    3674 status.go:343] host is not running, skipping remaining checks
	I0311 04:13:24.901812    3674 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-976000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (33.996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (55.61s)
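Note: every "minikube status" invocation in this test exits with code 7. Assuming minikube's bit-flag exit scheme for the status command (host = 1, kubelet = 2, apiserver = 4; that mapping is an assumption, not shown in this log), 7 means all three components are down, which matches the three Stopped fields printed above. A minimal Go sketch of the decoding:

	package main

	import "fmt"

	func main() {
		// Exit code observed from "minikube status" above.
		const exitCode = 7
		// Assumed bit meanings; not taken from this log.
		flags := []struct {
			bit  int
			what string
		}{
			{1, "host not running"},
			{2, "kubelet not running"},
			{4, "apiserver not running"},
		}
		for _, f := range flags {
			if exitCode&f.bit != 0 {
				fmt.Println(f.what)
			}
		}
	}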

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-976000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-976000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-976000: (3.314418333s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-976000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-976000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22898175s)

                                                
                                                
-- stdout --
	* [multinode-976000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-976000" primary control-plane node in "multinode-976000" cluster
	* Restarting existing qemu2 VM for "multinode-976000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-976000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:13:28.347409    3700 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:13:28.347919    3700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:28.347926    3700 out.go:304] Setting ErrFile to fd 2...
	I0311 04:13:28.347930    3700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:28.348199    3700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:13:28.350070    3700 out.go:298] Setting JSON to false
	I0311 04:13:28.370288    3700 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2580,"bootTime":1710153028,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:13:28.370351    3700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:13:28.375066    3700 out.go:177] * [multinode-976000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:13:28.382841    3700 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:13:28.382895    3700 notify.go:220] Checking for updates...
	I0311 04:13:28.386034    3700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:13:28.389032    3700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:13:28.390598    3700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:13:28.398971    3700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:13:28.400481    3700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:13:28.404382    3700 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:13:28.404444    3700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:13:28.408994    3700 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:13:28.414972    3700 start.go:297] selected driver: qemu2
	I0311 04:13:28.414979    3700 start.go:901] validating driver "qemu2" against &{Name:multinode-976000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:13:28.415057    3700 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:13:28.417467    3700 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:13:28.417511    3700 cni.go:84] Creating CNI manager for ""
	I0311 04:13:28.417516    3700 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 04:13:28.417558    3700 start.go:340] cluster config:
	{Name:multinode-976000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:13:28.422465    3700 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:13:28.430983    3700 out.go:177] * Starting "multinode-976000" primary control-plane node in "multinode-976000" cluster
	I0311 04:13:28.435045    3700 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:13:28.435063    3700 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:13:28.435075    3700 cache.go:56] Caching tarball of preloaded images
	I0311 04:13:28.435140    3700 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:13:28.435148    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:13:28.435227    3700 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/multinode-976000/config.json ...
	I0311 04:13:28.435751    3700 start.go:360] acquireMachinesLock for multinode-976000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:13:28.435790    3700 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "multinode-976000"
	I0311 04:13:28.435799    3700 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:13:28.435806    3700 fix.go:54] fixHost starting: 
	I0311 04:13:28.435945    3700 fix.go:112] recreateIfNeeded on multinode-976000: state=Stopped err=<nil>
	W0311 04:13:28.435957    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:13:28.439956    3700 out.go:177] * Restarting existing qemu2 VM for "multinode-976000" ...
	I0311 04:13:28.448161    3700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:1b:71:bc:ac:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:13:28.450691    3700 main.go:141] libmachine: STDOUT: 
	I0311 04:13:28.450718    3700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:13:28.450754    3700 fix.go:56] duration metric: took 14.949334ms for fixHost
	I0311 04:13:28.450760    3700 start.go:83] releasing machines lock for "multinode-976000", held for 14.965459ms
	W0311 04:13:28.450778    3700 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:13:28.450843    3700 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:13:28.450849    3700 start.go:728] Will try again in 5 seconds ...
	I0311 04:13:33.452888    3700 start.go:360] acquireMachinesLock for multinode-976000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:13:33.453243    3700 start.go:364] duration metric: took 242.125µs to acquireMachinesLock for "multinode-976000"
	I0311 04:13:33.453373    3700 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:13:33.453391    3700 fix.go:54] fixHost starting: 
	I0311 04:13:33.454088    3700 fix.go:112] recreateIfNeeded on multinode-976000: state=Stopped err=<nil>
	W0311 04:13:33.454120    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:13:33.459564    3700 out.go:177] * Restarting existing qemu2 VM for "multinode-976000" ...
	I0311 04:13:33.466632    3700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:1b:71:bc:ac:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:13:33.476396    3700 main.go:141] libmachine: STDOUT: 
	I0311 04:13:33.476476    3700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:13:33.476559    3700 fix.go:56] duration metric: took 23.16975ms for fixHost
	I0311 04:13:33.476581    3700 start.go:83] releasing machines lock for "multinode-976000", held for 23.31ms
	W0311 04:13:33.476800    3700 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-976000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:13:33.484459    3700 out.go:177] 
	W0311 04:13:33.488550    3700 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:13:33.488592    3700 out.go:239] * 
	W0311 04:13:33.491635    3700 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:13:33.497590    3700 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-976000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-976000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (33.955084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.68s)
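Note: both restart attempts above fail with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused", meaning nothing was listening on the unix socket that socket_vmnet_client connects to before it can launch qemu-system-aarch64. A minimal diagnostic sketch (not part of the suite; the socket path is taken from the driver invocation in the log) simply dials the socket:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path from the qemu2 driver command line above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A refusal here reproduces the failure seen in the restarts.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}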

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 node delete m03: exit status 83 (41.490125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-976000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-976000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-976000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr: exit status 7 (31.616084ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:13:33.687698    3714 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:13:33.687846    3714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:33.687850    3714 out.go:304] Setting ErrFile to fd 2...
	I0311 04:13:33.687852    3714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:33.687976    3714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:13:33.688099    3714 out.go:298] Setting JSON to false
	I0311 04:13:33.688110    3714 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:13:33.688168    3714 notify.go:220] Checking for updates...
	I0311 04:13:33.688299    3714 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:13:33.688304    3714 status.go:255] checking status of multinode-976000 ...
	I0311 04:13:33.688527    3714 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:13:33.688531    3714 status.go:343] host is not running, skipping remaining checks
	I0311 04:13:33.688533    3714 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (31.686125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
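Note: "node delete" exits with 83 and prints only the "host is not running" advice, so the failure is a precondition problem rather than a delete bug. A guard in the spirit of the post-mortem check in helpers_test.go, sketched below under the assumption that the same binary and profile names apply, would separate the two cases:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same status invocation the post-mortem uses; Output still returns
		// the captured stdout ("Stopped") even when the command exits non-zero.
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "multinode-976000").Output()
		state := strings.TrimSpace(string(out))
		if state != "Running" {
			fmt.Printf("host is %q; node delete cannot succeed\n", state)
			return
		}
		fmt.Println("host running; a node delete failure would be a real bug")
	}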

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-976000 stop: (3.187181709s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status: exit status 7 (64.7415ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr: exit status 7 (33.81875ms)

                                                
                                                
-- stdout --
	multinode-976000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:13:37.005657    3741 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:13:37.005847    3741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:37.005850    3741 out.go:304] Setting ErrFile to fd 2...
	I0311 04:13:37.005857    3741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:37.005973    3741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:13:37.006090    3741 out.go:298] Setting JSON to false
	I0311 04:13:37.006100    3741 mustload.go:65] Loading cluster: multinode-976000
	I0311 04:13:37.006163    3741 notify.go:220] Checking for updates...
	I0311 04:13:37.006280    3741 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:13:37.006285    3741 status.go:255] checking status of multinode-976000 ...
	I0311 04:13:37.006497    3741 status.go:330] multinode-976000 host status = "Stopped" (err=<nil>)
	I0311 04:13:37.006501    3741 status.go:343] host is not running, skipping remaining checks
	I0311 04:13:37.006503    3741 status.go:257] multinode-976000 status: &{Name:multinode-976000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr": multinode-976000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-976000 status --alsologtostderr": multinode-976000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (32.284541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.32s)
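Note: the assertions at multinode_test.go:364 and :368 count per-node "host: Stopped" and "kubelet: Stopped" lines and expect one pair per node, two for this multi-node test; only the primary node reports because the second node was never added. A self-contained sketch of that counting check, with the status text inlined as an assumption:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output as printed above, inlined so the sketch runs standalone.
		statusOut := "multinode-976000\n" +
			"type: Control Plane\n" +
			"host: Stopped\n" +
			"kubelet: Stopped\n" +
			"apiserver: Stopped\n" +
			"kubeconfig: Stopped\n"
		const wantNodes = 2 // the test expects one status block per node
		if got := strings.Count(statusOut, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(statusOut, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}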

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-976000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-976000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.197586083s)

                                                
                                                
-- stdout --
	* [multinode-976000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-976000" primary control-plane node in "multinode-976000" cluster
	* Restarting existing qemu2 VM for "multinode-976000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-976000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:13:37.069886    3745 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:13:37.070042    3745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:37.070046    3745 out.go:304] Setting ErrFile to fd 2...
	I0311 04:13:37.070048    3745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:13:37.070177    3745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:13:37.071156    3745 out.go:298] Setting JSON to false
	I0311 04:13:37.087129    3745 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2589,"bootTime":1710153028,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:13:37.087195    3745 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:13:37.092537    3745 out.go:177] * [multinode-976000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:13:37.105461    3745 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:13:37.100500    3745 notify.go:220] Checking for updates...
	I0311 04:13:37.113479    3745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:13:37.117356    3745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:13:37.120427    3745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:13:37.126402    3745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:13:37.129491    3745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:13:37.132819    3745 config.go:182] Loaded profile config "multinode-976000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:13:37.133106    3745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:13:37.136448    3745 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:13:37.143456    3745 start.go:297] selected driver: qemu2
	I0311 04:13:37.143462    3745 start.go:901] validating driver "qemu2" against &{Name:multinode-976000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:13:37.143516    3745 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:13:37.145923    3745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:13:37.145978    3745 cni.go:84] Creating CNI manager for ""
	I0311 04:13:37.145983    3745 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 04:13:37.146034    3745 start.go:340] cluster config:
	{Name:multinode-976000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:13:37.150491    3745 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:13:37.157387    3745 out.go:177] * Starting "multinode-976000" primary control-plane node in "multinode-976000" cluster
	I0311 04:13:37.161462    3745 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:13:37.161478    3745 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:13:37.161488    3745 cache.go:56] Caching tarball of preloaded images
	I0311 04:13:37.161540    3745 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:13:37.161547    3745 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:13:37.161614    3745 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/multinode-976000/config.json ...
	I0311 04:13:37.162089    3745 start.go:360] acquireMachinesLock for multinode-976000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:13:37.162114    3745 start.go:364] duration metric: took 19.333µs to acquireMachinesLock for "multinode-976000"
	I0311 04:13:37.162122    3745 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:13:37.162127    3745 fix.go:54] fixHost starting: 
	I0311 04:13:37.162252    3745 fix.go:112] recreateIfNeeded on multinode-976000: state=Stopped err=<nil>
	W0311 04:13:37.162262    3745 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:13:37.166491    3745 out.go:177] * Restarting existing qemu2 VM for "multinode-976000" ...
	I0311 04:13:37.174449    3745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:1b:71:bc:ac:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:13:37.176506    3745 main.go:141] libmachine: STDOUT: 
	I0311 04:13:37.176527    3745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:13:37.176561    3745 fix.go:56] duration metric: took 14.434708ms for fixHost
	I0311 04:13:37.176566    3745 start.go:83] releasing machines lock for "multinode-976000", held for 14.448083ms
	W0311 04:13:37.176573    3745 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:13:37.176605    3745 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:13:37.176610    3745 start.go:728] Will try again in 5 seconds ...
	I0311 04:13:42.178563    3745 start.go:360] acquireMachinesLock for multinode-976000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:13:42.178908    3745 start.go:364] duration metric: took 273.708µs to acquireMachinesLock for "multinode-976000"
	I0311 04:13:42.179026    3745 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:13:42.179069    3745 fix.go:54] fixHost starting: 
	I0311 04:13:42.179729    3745 fix.go:112] recreateIfNeeded on multinode-976000: state=Stopped err=<nil>
	W0311 04:13:42.179757    3745 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:13:42.185191    3745 out.go:177] * Restarting existing qemu2 VM for "multinode-976000" ...
	I0311 04:13:42.193246    3745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:1b:71:bc:ac:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/multinode-976000/disk.qcow2
	I0311 04:13:42.202612    3745 main.go:141] libmachine: STDOUT: 
	I0311 04:13:42.202760    3745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:13:42.202816    3745 fix.go:56] duration metric: took 23.748584ms for fixHost
	I0311 04:13:42.202830    3745 start.go:83] releasing machines lock for "multinode-976000", held for 23.898541ms
	W0311 04:13:42.203003    3745 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-976000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-976000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:13:42.210029    3745 out.go:177] 
	W0311 04:13:42.214123    3745 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:13:42.214167    3745 out.go:239] * 
	* 
	W0311 04:13:42.217100    3745 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:13:42.224187    3745 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-976000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (72.021834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
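Every failure in this group reduces to the same root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever comes up. A minimal pre-flight probe along the following lines (illustrative only; not part of the minikube test suite) would confirm the daemon is down before any individual test is blamed:

	// probe_socket_vmnet.go: verify the socket_vmnet daemon is accepting
	// connections on the unix socket the qemu2 driver depends on.
	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failures above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1) // mirrors the "Connection refused" every VM start hits
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}

On this host the probe would exit non-zero, matching the repeated driver errors above and below.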

TestMultiNode/serial/ValidateNameConflict (20.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-976000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-976000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-976000-m01 --driver=qemu2 : exit status 80 (9.963472958s)

-- stdout --
	* [multinode-976000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-976000-m01" primary control-plane node in "multinode-976000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-976000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-976000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-976000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-976000-m02 --driver=qemu2 : exit status 80 (10.03440875s)

-- stdout --
	* [multinode-976000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-976000-m02" primary control-plane node in "multinode-976000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-976000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-976000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-976000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-976000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-976000: exit status 83 (80.18675ms)

-- stdout --
	* The control-plane node multinode-976000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-976000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-976000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-976000 -n multinode-976000: exit status 7 (31.922ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.24s)
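The traces above also show minikube's retry behavior around host start (start.go:713 and start.go:728): a failed StartHost is retried once after a fixed five-second pause, and a second failure is promoted to a GUEST_PROVISION exit with status 80, which is what the harness records. A rough sketch of that flow, with hypothetical stand-in functions (the real logic in minikube's start code differs in detail):

	// retry_sketch.go: approximates the start/retry/give-up sequence in the
	// logs. startHost stands in for the qemu2 driver start, which on this
	// host always fails to reach /var/run/socket_vmnet.
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	var errVmnet = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	
	func startHost() error { return errVmnet }
	
	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the test harness reports
			}
		}
	}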

TestPreload (10.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-019000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-019000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.928832167s)

-- stdout --
	* [test-preload-019000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-019000" primary control-plane node in "test-preload-019000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-019000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:14:02.715658    3803 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:14:02.715781    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:02.715783    3803 out.go:304] Setting ErrFile to fd 2...
	I0311 04:14:02.715791    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:02.715928    3803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:14:02.717035    3803 out.go:298] Setting JSON to false
	I0311 04:14:02.733582    3803 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2614,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:14:02.733645    3803 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:14:02.739943    3803 out.go:177] * [test-preload-019000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:14:02.747936    3803 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:14:02.748000    3803 notify.go:220] Checking for updates...
	I0311 04:14:02.755882    3803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:14:02.758955    3803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:14:02.761949    3803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:14:02.764890    3803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:14:02.767938    3803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:14:02.771258    3803 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:14:02.771311    3803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:14:02.775838    3803 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:14:02.782899    3803 start.go:297] selected driver: qemu2
	I0311 04:14:02.782908    3803 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:14:02.782914    3803 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:14:02.785219    3803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:14:02.788833    3803 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:14:02.792062    3803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:14:02.792114    3803 cni.go:84] Creating CNI manager for ""
	I0311 04:14:02.792123    3803 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:14:02.792127    3803 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:14:02.792167    3803 start.go:340] cluster config:
	{Name:test-preload-019000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-019000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:14:02.796817    3803 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.803908    3803 out.go:177] * Starting "test-preload-019000" primary control-plane node in "test-preload-019000" cluster
	I0311 04:14:02.807735    3803 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0311 04:14:02.807813    3803 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/test-preload-019000/config.json ...
	I0311 04:14:02.807831    3803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/test-preload-019000/config.json: {Name:mk78f044851eb3c432dd2aa1238396edcf8a61d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:14:02.807863    3803 cache.go:107] acquiring lock: {Name:mk2f4032ff1030d1bcd8a6e7b64d0f5de14c576d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.807885    3803 cache.go:107] acquiring lock: {Name:mk1da0f667c2a086f7cf2a9d919139e132776d20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.807891    3803 cache.go:107] acquiring lock: {Name:mk213d642841d1265c67bf428b0b88e28bcb3935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.808088    3803 cache.go:107] acquiring lock: {Name:mkabcfdf7097522a5c39aa656e0fadbb7c449467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.808123    3803 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0311 04:14:02.808126    3803 cache.go:107] acquiring lock: {Name:mk5742ab4e5bd58a1c6e6f51c82764e5308f09ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.808148    3803 cache.go:107] acquiring lock: {Name:mkb233d607fb7f436884e30e617525d382844870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.808126    3803 start.go:360] acquireMachinesLock for test-preload-019000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:02.808153    3803 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0311 04:14:02.808182    3803 cache.go:107] acquiring lock: {Name:mk4990288966bbb3151478b8481d4b34e221df34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.808232    3803 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:14:02.808290    3803 cache.go:107] acquiring lock: {Name:mk8002a5b009d2cc681f699d9b69f60420e8f7de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:02.808349    3803 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:14:02.808307    3803 start.go:364] duration metric: took 129.709µs to acquireMachinesLock for "test-preload-019000"
	I0311 04:14:02.808424    3803 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:14:02.808456    3803 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0311 04:14:02.808474    3803 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0311 04:14:02.808475    3803 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0311 04:14:02.808375    3803 start.go:93] Provisioning new machine with config: &{Name:test-preload-019000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-019000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:14:02.808542    3803 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:14:02.815799    3803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:14:02.822213    3803 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0311 04:14:02.822934    3803 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:14:02.823061    3803 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0311 04:14:02.823135    3803 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:14:02.826383    3803 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:14:02.826410    3803 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0311 04:14:02.826496    3803 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0311 04:14:02.826546    3803 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0311 04:14:02.834542    3803 start.go:159] libmachine.API.Create for "test-preload-019000" (driver="qemu2")
	I0311 04:14:02.834558    3803 client.go:168] LocalClient.Create starting
	I0311 04:14:02.834687    3803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:14:02.834719    3803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:02.834728    3803 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:02.834778    3803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:14:02.834800    3803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:02.834808    3803 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:02.835166    3803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:14:02.978510    3803 main.go:141] libmachine: Creating SSH key...
	I0311 04:14:03.074133    3803 main.go:141] libmachine: Creating Disk image...
	I0311 04:14:03.074150    3803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:14:03.074336    3803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2
	I0311 04:14:03.086571    3803 main.go:141] libmachine: STDOUT: 
	I0311 04:14:03.086599    3803 main.go:141] libmachine: STDERR: 
	I0311 04:14:03.086685    3803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2 +20000M
	I0311 04:14:03.098829    3803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:14:03.098853    3803 main.go:141] libmachine: STDERR: 
	I0311 04:14:03.098872    3803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2
	I0311 04:14:03.098877    3803 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:14:03.098905    3803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:83:e9:db:31:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2
	I0311 04:14:03.100735    3803 main.go:141] libmachine: STDOUT: 
	I0311 04:14:03.100751    3803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:03.100768    3803 client.go:171] duration metric: took 266.213292ms to LocalClient.Create
	I0311 04:14:04.824991    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0311 04:14:04.867041    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0311 04:14:04.915708    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0311 04:14:04.949950    3803 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0311 04:14:04.950054    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0311 04:14:04.950357    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0311 04:14:04.962161    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0311 04:14:04.964160    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0311 04:14:05.100892    3803 start.go:128] duration metric: took 2.292391416s to createHost
	I0311 04:14:05.100940    3803 start.go:83] releasing machines lock for "test-preload-019000", held for 2.292637542s
	W0311 04:14:05.100981    3803 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:05.111819    3803 out.go:177] * Deleting "test-preload-019000" in qemu2 ...
	W0311 04:14:05.143373    3803 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:05.143408    3803 start.go:728] Will try again in 5 seconds ...
	I0311 04:14:05.175179    3803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0311 04:14:05.175222    3803 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.367275792s
	I0311 04:14:05.175263    3803 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0311 04:14:05.586413    3803 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0311 04:14:05.586544    3803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 04:14:06.361091    3803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0311 04:14:06.361169    3803 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.5530965s
	I0311 04:14:06.361198    3803 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0311 04:14:06.504554    3803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0311 04:14:06.504641    3803 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.696659292s
	I0311 04:14:06.504669    3803 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0311 04:14:07.257323    3803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0311 04:14:07.257375    3803 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.449630709s
	I0311 04:14:07.257401    3803 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0311 04:14:07.278050    3803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0311 04:14:07.278085    3803 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.4703625s
	I0311 04:14:07.278110    3803 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0311 04:14:08.428519    3803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0311 04:14:08.428573    3803 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.620560625s
	I0311 04:14:08.428604    3803 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0311 04:14:08.973980    3803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0311 04:14:08.974036    3803 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.166358s
	I0311 04:14:08.974087    3803 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0311 04:14:10.143430    3803 start.go:360] acquireMachinesLock for test-preload-019000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:10.143847    3803 start.go:364] duration metric: took 332.125µs to acquireMachinesLock for "test-preload-019000"
	I0311 04:14:10.143969    3803 start.go:93] Provisioning new machine with config: &{Name:test-preload-019000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-019000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:14:10.144182    3803 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:14:10.148836    3803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:14:10.198801    3803 start.go:159] libmachine.API.Create for "test-preload-019000" (driver="qemu2")
	I0311 04:14:10.198855    3803 client.go:168] LocalClient.Create starting
	I0311 04:14:10.198967    3803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:14:10.199053    3803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:10.199074    3803 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:10.199160    3803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:14:10.199203    3803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:10.199216    3803 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:10.199769    3803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:14:10.351336    3803 main.go:141] libmachine: Creating SSH key...
	I0311 04:14:10.537917    3803 main.go:141] libmachine: Creating Disk image...
	I0311 04:14:10.537925    3803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:14:10.538134    3803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2
	I0311 04:14:10.550932    3803 main.go:141] libmachine: STDOUT: 
	I0311 04:14:10.550955    3803 main.go:141] libmachine: STDERR: 
	I0311 04:14:10.551001    3803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2 +20000M
	I0311 04:14:10.562173    3803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:14:10.562195    3803 main.go:141] libmachine: STDERR: 
	I0311 04:14:10.562210    3803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2
	I0311 04:14:10.562215    3803 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:14:10.562265    3803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:62:3a:b8:04:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/test-preload-019000/disk.qcow2
	I0311 04:14:10.564134    3803 main.go:141] libmachine: STDOUT: 
	I0311 04:14:10.564153    3803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:10.564169    3803 client.go:171] duration metric: took 365.321ms to LocalClient.Create
	I0311 04:14:12.564502    3803 start.go:128] duration metric: took 2.420329958s to createHost
	I0311 04:14:12.564570    3803 start.go:83] releasing machines lock for "test-preload-019000", held for 2.420770958s
	W0311 04:14:12.564866    3803 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-019000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-019000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:12.579306    3803 out.go:177] 
	W0311 04:14:12.582552    3803 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:14:12.582579    3803 out.go:239] * 
	* 
	W0311 04:14:12.585716    3803 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:14:12.597484    3803 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-019000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-11 04:14:12.616767 -0700 PDT m=+2390.148329709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-019000 -n test-preload-019000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-019000 -n test-preload-019000: exit status 7 (66.4715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-019000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-019000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-019000
--- FAIL: TestPreload (10.10s)
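One detail worth noting in the TestPreload trace: image caching runs concurrently with host creation. Both VM attempts fail within milliseconds, yet the cache.go goroutines keep downloading, and seven of the eight images are saved to the cache before minikube exits. A minimal illustration of that decoupling, using stand-in work rather than minikube's actual cache code:

	// cache_sketch.go: host creation fails fast while image caching, started
	// in parallel, drains to completion -- mirroring the log ordering above.
	package main
	
	import (
		"errors"
		"fmt"
		"sync"
	)
	
	func main() {
		images := []string{ // a subset of the images cached in the log
			"registry.k8s.io/pause:3.7",
			"registry.k8s.io/coredns/coredns:v1.8.6",
			"registry.k8s.io/kube-apiserver:v1.24.4",
		}
		var wg sync.WaitGroup
		for _, img := range images {
			wg.Add(1)
			go func(img string) {
				defer wg.Done() // each image is cached independently of the VM
				fmt.Println("save to tar file", img, "succeeded")
			}(img)
		}
		err := errors.New(`Failed to connect to "/var/run/socket_vmnet"`)
		fmt.Println("error starting host:", err) // createHost fails immediately
		wg.Wait()                                // cache work still completes
	}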

TestScheduledStopUnix (10.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-556000 --memory=2048 --driver=qemu2 
E0311 04:14:15.851988    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-556000 --memory=2048 --driver=qemu2 : exit status 80 (9.83730375s)

-- stdout --
	* [scheduled-stop-556000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-556000" primary control-plane node in "scheduled-stop-556000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-556000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-556000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-556000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-556000" primary control-plane node in "scheduled-stop-556000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-556000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-556000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-11 04:14:22.625195 -0700 PDT m=+2400.157055667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-556000 -n scheduled-stop-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-556000 -n scheduled-stop-556000: exit status 7 (70.574208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-556000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-556000
--- FAIL: TestScheduledStopUnix (10.02s)
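
Note: every failed qemu2 start in this section dies at the same point -- QEMU cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never created. A minimal spot-check on the build host; this assumes socket_vmnet was installed via Homebrew, so the exact paths and service name are assumptions rather than facts from this log:

	ls -l /var/run/socket_vmnet               # does the unix socket exist at all?
	sudo launchctl list | grep -i vmnet       # is a launchd service for it loaded?
	sudo brew services restart socket_vmnet   # restart the daemon if it is down

Until the daemon is back, every later qemu2-driver test in this report fails during provisioning with the identical error.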

TestSkaffold (16.58s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1347107092 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-831000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-831000 --memory=2600 --driver=qemu2 : exit status 80 (9.8195765s)

-- stdout --
	* [skaffold-831000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-831000" primary control-plane node in "skaffold-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-831000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-831000" primary control-plane node in "skaffold-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-11 04:14:39.212733 -0700 PDT m=+2416.745087876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-831000 -n skaffold-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-831000 -n skaffold-831000: exit status 7 (62.31025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-831000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-831000
--- FAIL: TestSkaffold (16.58s)
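
Note: skaffold itself never ran -- the version probe at skaffold_test.go:63 succeeded -- so this failure is purely the provisioning error above. To separate driver problems from test-specific ones, the start can be replayed by hand with verbose logging; a sketch, where the profile name and verbosity level are arbitrary choices:

	out/minikube-darwin-arm64 start -p vmnet-probe --driver=qemu2 --network=socket_vmnet --alsologtostderr -v=7
	out/minikube-darwin-arm64 delete -p vmnet-probe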

TestRunningBinaryUpgrade (634.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3372797166 start -p running-upgrade-745000 --memory=2200 --vm-driver=qemu2 
E0311 04:15:35.766573    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3372797166 start -p running-upgrade-745000 --memory=2200 --vm-driver=qemu2 : (1m10.2302745s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-745000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-745000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m44.80898525s)

-- stdout --
	* [running-upgrade-745000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-745000" primary control-plane node in "running-upgrade-745000" cluster
	* Updating the running qemu2 "running-upgrade-745000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0311 04:16:16.022375    4133 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:16:16.022576    4133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:16:16.022580    4133 out.go:304] Setting ErrFile to fd 2...
	I0311 04:16:16.022582    4133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:16:16.022716    4133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:16:16.023757    4133 out.go:298] Setting JSON to false
	I0311 04:16:16.041375    4133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2748,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:16:16.041447    4133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:16:16.045901    4133 out.go:177] * [running-upgrade-745000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:16:16.055855    4133 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:16:16.053001    4133 notify.go:220] Checking for updates...
	I0311 04:16:16.061740    4133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:16:16.069894    4133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:16:16.072836    4133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:16:16.076823    4133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:16:16.079884    4133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:16:16.083169    4133 config.go:182] Loaded profile config "running-upgrade-745000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:16:16.086849    4133 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 04:16:16.089887    4133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:16:16.092895    4133 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:16:16.099910    4133 start.go:297] selected driver: qemu2
	I0311 04:16:16.099918    4133 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:16.099988    4133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:16:16.102457    4133 cni.go:84] Creating CNI manager for ""
	I0311 04:16:16.102476    4133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:16:16.102501    4133 start.go:340] cluster config:
	{Name:running-upgrade-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:16.102553    4133 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:16:16.109910    4133 out.go:177] * Starting "running-upgrade-745000" primary control-plane node in "running-upgrade-745000" cluster
	I0311 04:16:16.113910    4133 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 04:16:16.113926    4133 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0311 04:16:16.113937    4133 cache.go:56] Caching tarball of preloaded images
	I0311 04:16:16.113991    4133 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:16:16.114003    4133 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0311 04:16:16.114055    4133 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/config.json ...
	I0311 04:16:16.114537    4133 start.go:360] acquireMachinesLock for running-upgrade-745000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:16:16.114563    4133 start.go:364] duration metric: took 21µs to acquireMachinesLock for "running-upgrade-745000"
	I0311 04:16:16.114571    4133 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:16:16.114577    4133 fix.go:54] fixHost starting: 
	I0311 04:16:16.115286    4133 fix.go:112] recreateIfNeeded on running-upgrade-745000: state=Running err=<nil>
	W0311 04:16:16.115295    4133 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:16:16.119851    4133 out.go:177] * Updating the running qemu2 "running-upgrade-745000" VM ...
	I0311 04:16:16.129801    4133 machine.go:94] provisionDockerMachine start ...
	I0311 04:16:16.129860    4133 main.go:141] libmachine: Using SSH client type: native
	I0311 04:16:16.129991    4133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d61a90] 0x104d642f0 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0311 04:16:16.129997    4133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 04:16:16.183649    4133 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-745000
	
	I0311 04:16:16.183667    4133 buildroot.go:166] provisioning hostname "running-upgrade-745000"
	I0311 04:16:16.183711    4133 main.go:141] libmachine: Using SSH client type: native
	I0311 04:16:16.183822    4133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d61a90] 0x104d642f0 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0311 04:16:16.183829    4133 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-745000 && echo "running-upgrade-745000" | sudo tee /etc/hostname
	I0311 04:16:16.238839    4133 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-745000
	
	I0311 04:16:16.238892    4133 main.go:141] libmachine: Using SSH client type: native
	I0311 04:16:16.238986    4133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d61a90] 0x104d642f0 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0311 04:16:16.238994    4133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-745000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-745000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-745000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 04:16:16.288508    4133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 04:16:16.288520    4133 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18350-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18350-986/.minikube}
	I0311 04:16:16.288527    4133 buildroot.go:174] setting up certificates
	I0311 04:16:16.288532    4133 provision.go:84] configureAuth start
	I0311 04:16:16.288538    4133 provision.go:143] copyHostCerts
	I0311 04:16:16.288591    4133 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem, removing ...
	I0311 04:16:16.288596    4133 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem
	I0311 04:16:16.289390    4133 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem (1082 bytes)
	I0311 04:16:16.289537    4133 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem, removing ...
	I0311 04:16:16.289545    4133 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem
	I0311 04:16:16.289596    4133 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem (1123 bytes)
	I0311 04:16:16.289697    4133 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem, removing ...
	I0311 04:16:16.289700    4133 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem
	I0311 04:16:16.289736    4133 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem (1675 bytes)
	I0311 04:16:16.289815    4133 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-745000 san=[127.0.0.1 localhost minikube running-upgrade-745000]
	I0311 04:16:16.398579    4133 provision.go:177] copyRemoteCerts
	I0311 04:16:16.398616    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 04:16:16.398625    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
	I0311 04:16:16.425882    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 04:16:16.432472    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 04:16:16.438959    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 04:16:16.445672    4133 provision.go:87] duration metric: took 157.139875ms to configureAuth
	I0311 04:16:16.445681    4133 buildroot.go:189] setting minikube options for container-runtime
	I0311 04:16:16.445798    4133 config.go:182] Loaded profile config "running-upgrade-745000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:16:16.445829    4133 main.go:141] libmachine: Using SSH client type: native
	I0311 04:16:16.445916    4133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d61a90] 0x104d642f0 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0311 04:16:16.445920    4133 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0311 04:16:16.496278    4133 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0311 04:16:16.496287    4133 buildroot.go:70] root file system type: tmpfs
	I0311 04:16:16.496338    4133 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0311 04:16:16.496384    4133 main.go:141] libmachine: Using SSH client type: native
	I0311 04:16:16.496490    4133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d61a90] 0x104d642f0 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0311 04:16:16.496522    4133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0311 04:16:16.550540    4133 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0311 04:16:16.550587    4133 main.go:141] libmachine: Using SSH client type: native
	I0311 04:16:16.550691    4133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d61a90] 0x104d642f0 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0311 04:16:16.550701    4133 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0311 04:16:16.601764    4133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 04:16:16.601775    4133 machine.go:97] duration metric: took 471.979708ms to provisionDockerMachine
	I0311 04:16:16.601780    4133 start.go:293] postStartSetup for "running-upgrade-745000" (driver="qemu2")
	I0311 04:16:16.601787    4133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 04:16:16.601838    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 04:16:16.601847    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
	I0311 04:16:16.631459    4133 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 04:16:16.632777    4133 info.go:137] Remote host: Buildroot 2021.02.12
	I0311 04:16:16.632785    4133 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/addons for local assets ...
	I0311 04:16:16.632852    4133 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/files for local assets ...
	I0311 04:16:16.632939    4133 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem -> 14342.pem in /etc/ssl/certs
	I0311 04:16:16.633021    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 04:16:16.635572    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem --> /etc/ssl/certs/14342.pem (1708 bytes)
	I0311 04:16:16.642586    4133 start.go:296] duration metric: took 40.802125ms for postStartSetup
	I0311 04:16:16.642600    4133 fix.go:56] duration metric: took 528.041ms for fixHost
	I0311 04:16:16.642636    4133 main.go:141] libmachine: Using SSH client type: native
	I0311 04:16:16.642757    4133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d61a90] 0x104d642f0 <nil>  [] 0s} localhost 50273 <nil> <nil>}
	I0311 04:16:16.642762    4133 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 04:16:16.695038    4133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710155776.865778182
	
	I0311 04:16:16.695046    4133 fix.go:216] guest clock: 1710155776.865778182
	I0311 04:16:16.695050    4133 fix.go:229] Guest: 2024-03-11 04:16:16.865778182 -0700 PDT Remote: 2024-03-11 04:16:16.642602 -0700 PDT m=+0.642617460 (delta=223.176182ms)
	I0311 04:16:16.695061    4133 fix.go:200] guest clock delta is within tolerance: 223.176182ms
	I0311 04:16:16.695063    4133 start.go:83] releasing machines lock for "running-upgrade-745000", held for 580.513667ms
	I0311 04:16:16.695120    4133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 04:16:16.695139    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
	I0311 04:16:16.695122    4133 ssh_runner.go:195] Run: cat /version.json
	I0311 04:16:16.695178    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
	W0311 04:16:16.695709    4133 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50273: connect: connection refused
	I0311 04:16:16.695729    4133 retry.go:31] will retry after 265.044897ms: dial tcp [::1]:50273: connect: connection refused
	W0311 04:16:16.990480    4133 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0311 04:16:16.990566    4133 ssh_runner.go:195] Run: systemctl --version
	I0311 04:16:16.994588    4133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 04:16:16.996405    4133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 04:16:16.996440    4133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0311 04:16:16.999118    4133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0311 04:16:17.003957    4133 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 04:16:17.003970    4133 start.go:494] detecting cgroup driver to use...
	I0311 04:16:17.004043    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 04:16:17.010739    4133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0311 04:16:17.015754    4133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 04:16:17.019985    4133 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 04:16:17.020052    4133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 04:16:17.022966    4133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 04:16:17.025847    4133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 04:16:17.028656    4133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 04:16:17.031493    4133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 04:16:17.034495    4133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 04:16:17.037465    4133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 04:16:17.039936    4133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 04:16:17.042428    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:16:17.143405    4133 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0311 04:16:17.152555    4133 start.go:494] detecting cgroup driver to use...
	I0311 04:16:17.152623    4133 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0311 04:16:17.158170    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 04:16:17.163333    4133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 04:16:17.172318    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 04:16:17.177013    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 04:16:17.181620    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 04:16:17.186878    4133 ssh_runner.go:195] Run: which cri-dockerd
	I0311 04:16:17.188195    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0311 04:16:17.190758    4133 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0311 04:16:17.195798    4133 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0311 04:16:17.296449    4133 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0311 04:16:17.424929    4133 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0311 04:16:17.424999    4133 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0311 04:16:17.429914    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:16:17.508805    4133 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 04:16:38.980928    4133 ssh_runner.go:235] Completed: sudo systemctl restart docker: (21.4727435s)
	I0311 04:16:38.980994    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0311 04:16:38.985308    4133 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0311 04:16:38.991503    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 04:16:38.995952    4133 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0311 04:16:39.081082    4133 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0311 04:16:39.175453    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:16:39.238670    4133 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0311 04:16:39.244477    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 04:16:39.249692    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:16:39.315046    4133 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0311 04:16:39.359040    4133 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0311 04:16:39.359110    4133 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0311 04:16:39.361627    4133 start.go:562] Will wait 60s for crictl version
	I0311 04:16:39.361675    4133 ssh_runner.go:195] Run: which crictl
	I0311 04:16:39.363219    4133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 04:16:39.376117    4133 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0311 04:16:39.376187    4133 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 04:16:39.391829    4133 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 04:16:39.412178    4133 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0311 04:16:39.412245    4133 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0311 04:16:39.413777    4133 kubeadm.go:877] updating cluster {Name:running-upgrade-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0311 04:16:39.413820    4133 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 04:16:39.413860    4133 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 04:16:39.424642    4133 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 04:16:39.424650    4133 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 04:16:39.424699    4133 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 04:16:39.427839    4133 ssh_runner.go:195] Run: which lz4
	I0311 04:16:39.429032    4133 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 04:16:39.430234    4133 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 04:16:39.430246    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0311 04:16:40.178944    4133 docker.go:649] duration metric: took 749.961084ms to copy over tarball
	I0311 04:16:40.178997    4133 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 04:16:41.288304    4133 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.109324417s)
	I0311 04:16:41.288320    4133 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 04:16:41.303817    4133 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 04:16:41.306572    4133 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0311 04:16:41.311544    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:16:41.376683    4133 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 04:16:42.911019    4133 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.534365375s)
	I0311 04:16:42.911124    4133 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 04:16:42.922040    4133 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 04:16:42.922049    4133 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 04:16:42.922053    4133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 04:16:42.928494    4133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:16:42.928494    4133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:16:42.928523    4133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:16:42.928546    4133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:16:42.928734    4133 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0311 04:16:42.929324    4133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:16:42.929339    4133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:16:42.929393    4133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:16:42.938417    4133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:16:42.938509    4133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:16:42.939072    4133 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0311 04:16:42.939285    4133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:16:42.939306    4133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:16:42.940129    4133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:16:42.940230    4133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:16:42.940257    4133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:16:44.880747    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:16:44.918863    4133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0311 04:16:44.918914    4133 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:16:44.919004    4133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:16:44.939880    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0311 04:16:44.942231    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:16:44.958106    4133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0311 04:16:44.958129    4133 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:16:44.958181    4133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:16:44.966930    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0311 04:16:44.970845    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0311 04:16:44.982468    4133 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0311 04:16:44.982489    4133 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0311 04:16:44.982537    4133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0311 04:16:44.991994    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0311 04:16:44.992093    4133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0311 04:16:44.993758    4133 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0311 04:16:44.993768    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0311 04:16:44.998578    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0311 04:16:45.002026    4133 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0311 04:16:45.002037    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0311 04:16:45.005932    4133 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0311 04:16:45.006069    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:16:45.012885    4133 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0311 04:16:45.012910    4133 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:16:45.012955    4133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0311 04:16:45.013498    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:16:45.046396    4133 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0311 04:16:45.046444    4133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0311 04:16:45.046455    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0311 04:16:45.046463    4133 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:16:45.046510    4133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:16:45.046523    4133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0311 04:16:45.046537    4133 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:16:45.046568    4133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:16:45.053889    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:16:45.057921    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0311 04:16:45.058030    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0311 04:16:45.058122    4133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0311 04:16:45.069062    4133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0311 04:16:45.069081    4133 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0311 04:16:45.069085    4133 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:16:45.069105    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0311 04:16:45.069134    4133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:16:45.087406    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0311 04:16:45.111547    4133 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0311 04:16:45.111560    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0311 04:16:45.148991    4133 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0311 04:16:45.512866    4133 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0311 04:16:45.513417    4133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:16:45.548852    4133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0311 04:16:45.548896    4133 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:16:45.548999    4133 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:16:46.692048    4133 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.1430455s)
	I0311 04:16:46.692087    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 04:16:46.692529    4133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:16:46.698162    4133 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0311 04:16:46.698241    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0311 04:16:46.753084    4133 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:16:46.753099    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0311 04:16:46.997208    4133 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 04:16:46.997252    4133 cache_images.go:92] duration metric: took 4.075313209s to LoadCachedImages
	W0311 04:16:46.997285    4133 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
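
The cache_images sequence above repeats one pattern per image: probe the guest with docker image inspect, scp the cached tarball over only when a stat probe fails, then stream it into the daemon with `sudo cat ... | docker load`. Below is a minimal Go sketch of the probe-and-load step; the runCmd helper is hypothetical (it runs the shell command locally, standing in for minikube's ssh_runner, which executes on the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runCmd is a hypothetical stand-in for minikube's ssh_runner: it runs
    // the shell command locally; the real runner executes it on the guest VM.
    func runCmd(shellCmd string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
    	return string(out), err
    }

    // loadImageIfMissing mirrors the log's pattern: a stat probe decides whether
    // the tarball must be copied first, then `sudo cat ... | docker load` imports it.
    func loadImageIfMissing(tarball string) error {
    	if _, err := runCmd(fmt.Sprintf("stat -c \"%%s %%y\" %s", tarball)); err != nil {
    		// In the real flow this is where the scp from the host cache happens.
    		return fmt.Errorf("%s missing on guest, copy it first: %w", tarball, err)
    	}
    	_, err := runCmd(fmt.Sprintf("sudo cat %s | docker load", tarball))
    	return err
    }

    func main() {
    	if err := loadImageIfMissing("/var/lib/minikube/images/pause_3.7"); err != nil {
    		fmt.Println(err)
    	}
    }
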
	I0311 04:16:46.997293    4133 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0311 04:16:46.997354    4133 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-745000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 04:16:46.997412    4133 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 04:16:47.010791    4133 cni.go:84] Creating CNI manager for ""
	I0311 04:16:47.010801    4133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:16:47.010806    4133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 04:16:47.010813    4133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-745000 NodeName:running-upgrade-745000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 04:16:47.010877    4133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-745000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 04:16:47.010934    4133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0311 04:16:47.013908    4133 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 04:16:47.013936    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 04:16:47.017027    4133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0311 04:16:47.022213    4133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 04:16:47.027384    4133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0311 04:16:47.033175    4133 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0311 04:16:47.034552    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:16:47.099336    4133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:16:47.104672    4133 certs.go:68] Setting up /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000 for IP: 10.0.2.15
	I0311 04:16:47.104680    4133 certs.go:194] generating shared ca certs ...
	I0311 04:16:47.104688    4133 certs.go:226] acquiring lock for ca certs: {Name:mk0eff4ed47e91bcbb09c749a04fbf8f2901eda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.104849    4133 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key
	I0311 04:16:47.104895    4133 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key
	I0311 04:16:47.104901    4133 certs.go:256] generating profile certs ...
	I0311 04:16:47.104970    4133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.key
	I0311 04:16:47.104985    4133 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9
	I0311 04:16:47.104999    4133 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0311 04:16:47.190275    4133 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9 ...
	I0311 04:16:47.190291    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9: {Name:mk8bd2020a245cdde288e261a892ec5c133a8401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.190632    4133 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9 ...
	I0311 04:16:47.190638    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9: {Name:mk16202969482b6cca2ef030f4bb0253f9f004b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.190768    4133 certs.go:381] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt
	I0311 04:16:47.190903    4133 certs.go:385] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key
	I0311 04:16:47.191070    4133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/proxy-client.key
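
The profile-cert step above mints an apiserver serving certificate whose IP SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and 10.0.2.15. A minimal sketch of issuing such a cert with Go's crypto/x509 follows; it self-signs for brevity, whereas the real flow signs with the minikubeCA key:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The IP SANs listed in the crypto.go:68 log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    			net.ParseIP("10.0.2.15"),
    		},
    	}
    	// Self-signed for brevity; the real flow signs with the minikubeCA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
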
	I0311 04:16:47.191196    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem (1338 bytes)
	W0311 04:16:47.191229    4133 certs.go:480] ignoring /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434_empty.pem, impossibly tiny 0 bytes
	I0311 04:16:47.191234    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 04:16:47.191262    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem (1082 bytes)
	I0311 04:16:47.191286    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem (1123 bytes)
	I0311 04:16:47.191310    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem (1675 bytes)
	I0311 04:16:47.191363    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem (1708 bytes)
	I0311 04:16:47.191686    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 04:16:47.200226    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 04:16:47.206912    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 04:16:47.214243    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 04:16:47.221840    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 04:16:47.229323    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 04:16:47.236691    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 04:16:47.244026    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 04:16:47.251147    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 04:16:47.258133    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem --> /usr/share/ca-certificates/1434.pem (1338 bytes)
	I0311 04:16:47.265486    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem --> /usr/share/ca-certificates/14342.pem (1708 bytes)
	I0311 04:16:47.272641    4133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 04:16:47.277517    4133 ssh_runner.go:195] Run: openssl version
	I0311 04:16:47.279242    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 04:16:47.282442    4133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:16:47.283922    4133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:16:47.283941    4133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:16:47.285627    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 04:16:47.288570    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434.pem && ln -fs /usr/share/ca-certificates/1434.pem /etc/ssl/certs/1434.pem"
	I0311 04:16:47.291494    4133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434.pem
	I0311 04:16:47.292870    4133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 10:43 /usr/share/ca-certificates/1434.pem
	I0311 04:16:47.292893    4133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434.pem
	I0311 04:16:47.294777    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1434.pem /etc/ssl/certs/51391683.0"
	I0311 04:16:47.297935    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14342.pem && ln -fs /usr/share/ca-certificates/14342.pem /etc/ssl/certs/14342.pem"
	I0311 04:16:47.301143    4133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14342.pem
	I0311 04:16:47.302670    4133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 10:43 /usr/share/ca-certificates/14342.pem
	I0311 04:16:47.302693    4133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14342.pem
	I0311 04:16:47.304645    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14342.pem /etc/ssl/certs/3ec20f2e.0"
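
The ln -fs commands above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found through a symlink named <subject-hash>.0, with the hash taken from `openssl x509 -hash -noout`. A sketch of the same dance, using a path from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA reproduces the hash-symlink step from the log: compute the
    // subject hash with openssl, then link the PEM under <hash>.0 so the
    // OpenSSL certificate lookup can find it.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	// ln -f semantics: replace a stale link if one is present.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println("install failed:", err)
    	}
    }
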
	I0311 04:16:47.307295    4133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 04:16:47.308749    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 04:16:47.310444    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 04:16:47.312603    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 04:16:47.314473    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 04:16:47.316631    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 04:16:47.318325    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
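
Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The equivalent check in Go, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: report whether the
    // certificate at path expires within the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
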
	I0311 04:16:47.320128    4133 kubeadm.go:391] StartCluster: {Name:running-upgrade-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:47.320212    4133 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:16:47.330551    4133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 04:16:47.334383    4133 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 04:16:47.334391    4133 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 04:16:47.334394    4133 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 04:16:47.334419    4133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 04:16:47.337689    4133 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.337912    4133 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-745000" does not appear in /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:16:47.337971    4133 kubeconfig.go:62] /Users/jenkins/minikube-integration/18350-986/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-745000" cluster setting kubeconfig missing "running-upgrade-745000" context setting]
	I0311 04:16:47.338117    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.338786    4133 kapi.go:59] client config for running-upgrade-745000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10604ffd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 04:16:47.339101    4133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 04:16:47.342207    4133 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-745000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
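
The drift check above rides on diff's exit status: 0 means the rendered kubeadm.yaml is unchanged, 1 means the files differ and the cluster must be reconfigured from the .new file. A sketch of separating "drift" from a genuine failure:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrift runs diff -u and maps exit status 1 (files differ) to "drift";
    // any other non-zero exit is treated as a real error.
    func configDrift(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
    	if err == nil {
    		return false, "", nil // identical
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, string(out), nil // out holds the unified diff
    	}
    	return false, "", err
    }

    func main() {
    	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drift, err)
    	if drift {
    		fmt.Print(diff)
    	}
    }
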
	I0311 04:16:47.342213    4133 kubeadm.go:1153] stopping kube-system containers ...
	I0311 04:16:47.342253    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:16:47.352840    4133 docker.go:483] Stopping containers: [cd570c5dd104 3947628dca50 3b51857cc2b5 8c896a6db6a9 42c9c863cbbd 4d4cb543edc7 a033847a9c75 51ea7e87d708 479f672812ba 735cbefdffdd 7330a9adce6b 9d632451aa93]
	I0311 04:16:47.352906    4133 ssh_runner.go:195] Run: docker stop cd570c5dd104 3947628dca50 3b51857cc2b5 8c896a6db6a9 42c9c863cbbd 4d4cb543edc7 a033847a9c75 51ea7e87d708 479f672812ba 735cbefdffdd 7330a9adce6b 9d632451aa93
	I0311 04:16:47.363611    4133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 04:16:47.450915    4133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:16:47.454713    4133 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar 11 11:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 11 11:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 11 11:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 11 11:16 /etc/kubernetes/scheduler.conf
	
	I0311 04:16:47.454744    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf
	I0311 04:16:47.457928    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.457961    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:16:47.461191    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf
	I0311 04:16:47.464129    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.464160    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:16:47.466823    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf
	I0311 04:16:47.469743    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.469763    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:16:47.472574    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf
	I0311 04:16:47.474990    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.475018    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 04:16:47.477886    4133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:16:47.480828    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:47.501780    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:48.090177    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:48.294490    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:48.315513    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:48.334969    4133 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:16:48.335047    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:48.837090    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:49.337080    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:49.837068    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:49.841998    4133 api_server.go:72] duration metric: took 1.507076625s to wait for apiserver process to appear ...
	I0311 04:16:49.842007    4133 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:16:49.842015    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:16:54.844016    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:16:54.844053    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:16:59.844257    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:16:59.844279    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:04.844518    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:04.844540    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:09.845003    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:09.845072    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:14.846063    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:14.846158    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:19.846914    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:19.846971    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:24.848261    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:24.848359    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:29.850154    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:29.850197    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:34.852147    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:34.852184    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:39.852353    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:39.852393    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:44.854514    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:44.854570    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:49.856807    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
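
The healthz loop above pairs a short per-request timeout with retries against an overall deadline; every probe here hits the client timeout because the apiserver never comes up, so minikube falls back to gathering component logs. A hedged sketch of such a poll (InsecureSkipVerify only because this sketch carries no CA bundle; minikube verifies against its own CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the endpoint until it answers 200 OK or the
    // overall deadline passes; each request gets its own short timeout.
    func waitForHealthz(url string, overall time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
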
	I0311 04:17:49.856973    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:17:49.872142    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:17:49.872221    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:17:49.884500    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:17:49.884571    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:17:49.895827    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:17:49.895899    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:17:49.915948    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:17:49.916010    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:17:49.926014    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:17:49.926073    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:17:49.936615    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:17:49.936688    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:17:49.950808    4133 logs.go:276] 0 containers: []
	W0311 04:17:49.950818    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:17:49.950871    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:17:49.961144    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:17:49.961166    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:17:49.961172    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:17:49.975910    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:17:49.975922    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:17:49.993905    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:17:49.993920    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:17:50.005541    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:17:50.005554    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:17:50.017450    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:17:50.017463    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:17:50.022032    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:17:50.022041    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:17:50.059031    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:17:50.059042    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:17:50.074226    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:17:50.074240    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:17:50.115423    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:17:50.115434    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:17:50.126919    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:17:50.126933    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:17:50.138404    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:17:50.138413    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:17:50.163370    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:17:50.163377    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:17:50.242297    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:17:50.242311    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:17:50.256881    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:17:50.256895    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:17:50.268228    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:17:50.268239    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:17:50.284358    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:17:50.284373    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:17:50.300516    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:17:50.300526    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
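
Each log-gathering pass above locates containers via the k8s_<component> name prefix that cri-dockerd assigns, then tails the last 400 lines of each match. A compact sketch of one pass:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherLogs finds containers whose names start with k8s_<component>
    // (including exited ones, hence -a) and tails each one's last 400 lines.
    func gatherLogs(component string) error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	for _, id := range strings.Fields(string(out)) {
    		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return err
    		}
    		fmt.Printf("== %s (%s) ==\n%s", component, id, logs)
    	}
    	return nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		if err := gatherLogs(c); err != nil {
    			fmt.Println("gather failed:", err)
    		}
    	}
    }
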
	I0311 04:17:52.818325    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:57.820655    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:57.821130    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:17:57.863279    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:17:57.863417    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:17:57.900792    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:17:57.900873    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:17:57.920874    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:17:57.920936    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:17:57.931317    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:17:57.931388    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:17:57.941673    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:17:57.941749    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:17:57.952447    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:17:57.952523    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:17:57.962469    4133 logs.go:276] 0 containers: []
	W0311 04:17:57.962479    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:17:57.962532    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:17:57.973131    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:17:57.973148    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:17:57.973153    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:17:57.978062    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:17:57.978071    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:17:57.995499    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:17:57.995516    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:17:58.010108    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:17:58.010119    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:17:58.027698    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:17:58.027708    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:17:58.039171    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:17:58.039181    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:17:58.051433    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:17:58.051444    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:17:58.062786    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:17:58.062797    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:17:58.099414    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:17:58.099423    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:17:58.137733    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:17:58.137743    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:17:58.176580    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:17:58.176591    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:17:58.203088    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:17:58.203095    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:17:58.214586    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:17:58.214598    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:17:58.238491    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:17:58.238503    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:17:58.251021    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:17:58.251032    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:17:58.265601    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:17:58.265615    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:17:58.280669    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:17:58.280681    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:00.794042    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:05.796601    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:05.797015    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:05.835606    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:05.835745    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:05.858619    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:05.858740    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:05.873775    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:05.873849    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:05.888545    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:05.888628    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:05.901155    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:05.901253    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:05.913089    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:05.913164    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:05.923659    4133 logs.go:276] 0 containers: []
	W0311 04:18:05.923671    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:05.923734    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:05.940129    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:05.940148    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:05.940154    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:05.979380    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:05.979391    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:05.984365    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:05.984373    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:05.996104    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:05.996116    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:06.009343    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:06.009358    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:06.024628    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:06.024864    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:06.040090    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:06.040103    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:06.052950    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:06.052959    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:06.069934    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:06.069945    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:06.084405    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:06.084419    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:06.109827    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:06.109834    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:06.150739    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:06.150751    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:06.165385    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:06.165396    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:06.203901    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:06.203910    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:06.218807    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:06.218818    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:06.230247    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:06.230259    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:06.247476    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:06.247489    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:08.759495    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:13.762164    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:13.762568    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:13.796698    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:13.796829    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:13.816183    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:13.816294    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:13.830032    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:13.830114    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:13.841991    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:13.842068    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:13.852455    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:13.852528    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:13.863065    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:13.863131    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:13.873744    4133 logs.go:276] 0 containers: []
	W0311 04:18:13.873758    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:13.873818    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:13.884890    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:13.884908    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:13.884913    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:13.905517    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:13.905528    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:13.926675    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:13.926687    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:13.941151    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:13.941163    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:13.980251    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:13.980260    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:14.017136    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:14.017147    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:14.031389    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:14.031401    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:14.043111    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:14.043122    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:14.056850    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:14.056860    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:14.061192    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:14.061198    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:14.101101    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:14.101111    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:14.115289    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:14.115299    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:14.136748    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:14.136761    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:14.148895    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:14.148904    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:14.175001    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:14.175009    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:14.188709    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:14.188719    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:14.200592    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:14.200603    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
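The block above is one pass of minikube's apiserver wait loop: the probe of https://10.0.2.15:8443/healthz times out after 5 seconds ("context deadline exceeded"), the runner spends a couple of seconds collecting diagnostics, then retries. The probe itself can be reproduced by hand from inside the guest; a minimal sketch, with -k standing in for the certificate handling the Go HTTP client does (the flag is illustrative, not taken from the log):

    # Same endpoint and 5-second budget as the api_server.go check above.
    # -k skips TLS verification, since the apiserver certificate is not
    # in the default trust store.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz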
	I0311 04:18:16.714551    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:21.716546    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:21.716696    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:21.728109    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:21.728186    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:21.738534    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:21.738601    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:21.749588    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:21.749659    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:21.760404    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:21.760475    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:21.770674    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:21.770741    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:21.784269    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:21.784345    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:21.794594    4133 logs.go:276] 0 containers: []
	W0311 04:18:21.794606    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:21.794662    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:21.805187    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:21.805203    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:21.805209    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:21.843491    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:21.843506    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:21.855476    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:21.855489    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:21.867170    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:21.867184    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:21.882419    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:21.882433    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:21.894328    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:21.894339    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:21.905674    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:21.905689    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:21.910331    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:21.910339    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:21.947308    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:21.947321    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:21.961305    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:21.961320    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:21.979223    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:21.979234    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:21.991433    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:21.991444    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:22.005863    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:22.005874    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:22.022726    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:22.022737    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:22.047546    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:22.047559    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:22.092261    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:22.092273    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:22.106121    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:22.106132    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
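Each pass begins by locating containers per component. The docker ps queries above lean on the k8s_<container>_<pod>_... naming convention that cri-dockerd applies to kubelet-managed containers, and the name filter matches substrings, which is why a filter like k8s_etcd finds the pod's containers. Two IDs per component (for example [5f24fd902deb 3947628dca50]) are typically an exited container plus its replacement, since `docker ps -a` includes stopped containers; kindnet matches nothing, presumably because that CNI is not deployed in this profile. A rough shell equivalent of the discovery step, with the component list taken from the queries above:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      # --filter=name= does substring matching against the container name.
      ids=$(docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}')
      echo "$c: ${ids:-none}"
    done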
	I0311 04:18:24.622833    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:29.625090    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:29.625336    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:29.645954    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:29.646063    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:29.661452    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:29.661543    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:29.674232    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:29.674307    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:29.685428    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:29.685495    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:29.696362    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:29.696430    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:29.707443    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:29.707504    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:29.718328    4133 logs.go:276] 0 containers: []
	W0311 04:18:29.718342    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:29.718408    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:29.730353    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:29.730369    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:29.730375    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:29.767164    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:29.767178    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:29.788747    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:29.788759    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:29.802948    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:29.802961    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:29.817212    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:29.817227    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:29.835141    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:29.835151    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:29.847103    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:29.847114    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:29.872782    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:29.872790    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:29.885502    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:29.885514    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:29.923082    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:29.923094    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:29.941754    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:29.941765    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:29.961694    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:29.961705    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:29.977016    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:29.977024    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:29.996259    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:29.996270    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:30.000862    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:30.000868    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:30.039697    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:30.039713    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:30.052360    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:30.052372    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:32.577553    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:37.579892    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:37.580039    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:37.594827    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:37.594913    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:37.607001    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:37.607072    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:37.617347    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:37.617422    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:37.628210    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:37.628279    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:37.638818    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:37.638878    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:37.649962    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:37.650034    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:37.660515    4133 logs.go:276] 0 containers: []
	W0311 04:18:37.660527    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:37.660585    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:37.670767    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:37.670786    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:37.670792    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:37.675055    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:37.675061    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:37.716533    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:37.716544    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:37.733408    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:37.733419    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:37.748157    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:37.748168    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:37.774682    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:37.774692    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:37.787149    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:37.787160    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:37.825485    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:37.825497    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:37.861924    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:37.861956    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:37.876418    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:37.876431    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:37.888264    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:37.888276    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:37.900405    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:37.900417    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:37.915084    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:37.915094    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:37.931596    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:37.931613    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:37.950045    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:37.950058    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:37.965512    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:37.965524    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:37.983941    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:37.983952    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
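Every "Gathering logs for <component> [<id>]" line above pairs with one `docker logs --tail 400` invocation against an ID from the discovery step. Collected by hand it would look like the following sketch (the IDs are the ones reported above, purely as examples; any container ID works):

    # Last 400 lines from each container of interest, mirroring the
    # per-component collection in the log.
    for id in 08ec4c137e8e 8c896a6db6a9 d949a5f4c26f 42c9c863cbbd; do
      echo "=== $id ==="
      docker logs --tail 400 "$id"
    done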
	I0311 04:18:40.498159    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:45.500360    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:45.500567    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:45.520193    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:45.520262    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:45.532298    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:45.532359    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:45.544740    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:45.544816    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:45.557700    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:45.557772    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:45.570293    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:45.570357    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:45.583563    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:45.583635    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:45.595672    4133 logs.go:276] 0 containers: []
	W0311 04:18:45.595683    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:45.595736    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:45.611227    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:45.611245    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:45.611249    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:45.629382    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:45.629396    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:45.643381    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:45.643393    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:45.656692    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:45.656705    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:45.680117    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:45.680133    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:45.696184    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:45.696196    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:45.709834    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:45.709851    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:45.747541    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:45.747554    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:45.763256    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:45.763268    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:45.780332    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:45.780345    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:45.793846    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:45.793857    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:45.808886    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:45.808898    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:45.848724    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:45.848737    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:45.860914    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:45.860928    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:45.873658    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:45.873670    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:45.898053    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:45.898066    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:45.934119    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:45.934130    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:48.440345    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:53.441219    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:53.441340    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:53.453672    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:53.453749    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:53.469641    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:53.469714    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:53.480401    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:53.480461    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:53.491868    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:53.491936    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:53.503708    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:53.503781    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:53.514925    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:53.514990    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:53.525642    4133 logs.go:276] 0 containers: []
	W0311 04:18:53.525653    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:53.525706    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:53.536689    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:53.536707    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:53.536713    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:53.541494    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:53.541504    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:53.558062    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:53.558076    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:53.571603    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:53.571615    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:53.583914    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:53.583928    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:53.596990    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:53.597003    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:53.620994    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:53.621004    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:53.633222    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:53.633237    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:53.647836    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:53.647851    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:53.662348    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:53.662358    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:53.685755    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:53.685769    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:53.701876    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:53.701886    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:53.738106    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:53.738116    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:53.778693    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:53.778708    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:53.794506    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:53.794520    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:53.807437    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:53.807450    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:53.822699    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:53.822710    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
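Besides per-container logs, each pass pulls four host-level sources. The commands appear verbatim in the log and can be run directly in the guest:

    sudo journalctl -u kubelet -n 400                # kubelet service log
    sudo journalctl -u docker -u cri-docker -n 400   # runtime + CRI shim logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig    # node view via bundled kubectl

All four are bounded (-n 400 or tail -n 400), which keeps each diagnostic pass cheap even when it repeats many times.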
	I0311 04:18:56.361645    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:01.363946    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:01.364273    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:01.406764    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:01.406895    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:01.446873    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:01.446954    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:01.459414    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:01.459490    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:01.471058    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:01.471132    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:01.482770    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:01.482842    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:01.493753    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:01.493818    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:01.505226    4133 logs.go:276] 0 containers: []
	W0311 04:19:01.505238    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:01.505300    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:01.516608    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:01.516628    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:01.516634    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:01.529001    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:01.529012    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:01.553903    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:01.553922    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:01.568785    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:01.568796    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:01.581645    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:01.581659    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:01.595862    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:01.595873    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:01.613202    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:01.613213    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:01.628462    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:01.628472    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:01.633300    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:01.633309    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:01.671389    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:01.671403    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:01.709716    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:01.709728    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:01.724415    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:01.724427    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:01.736949    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:01.736960    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:01.749252    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:01.749261    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:01.763151    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:01.763164    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:01.802242    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:01.802255    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:01.817258    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:01.817268    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:04.335027    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:09.337619    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:09.338014    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:09.378186    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:09.378324    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:09.399836    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:09.399943    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:09.415175    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:09.415250    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:09.427387    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:09.427466    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:09.438216    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:09.438288    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:09.448767    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:09.448836    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:09.469759    4133 logs.go:276] 0 containers: []
	W0311 04:19:09.469774    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:09.469830    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:09.481209    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:09.481229    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:09.481235    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:09.493029    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:09.493040    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:09.531668    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:09.531678    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:09.536032    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:09.536041    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:09.550784    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:09.550796    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:09.565960    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:09.565976    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:09.583879    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:09.583891    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:09.602089    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:09.602099    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:09.613926    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:09.613935    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:09.647926    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:09.647938    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:09.661823    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:09.661836    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:09.672986    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:09.672999    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:09.699337    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:09.699357    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:09.741938    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:09.741949    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:09.759842    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:09.759854    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:09.771204    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:09.771216    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:09.782945    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:09.782955    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:12.300078    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:17.302663    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:17.302928    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:17.335463    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:17.335571    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:17.352419    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:17.352500    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:17.364805    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:17.364872    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:17.380984    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:17.381063    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:17.391653    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:17.391720    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:17.402573    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:17.402643    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:17.412721    4133 logs.go:276] 0 containers: []
	W0311 04:19:17.412731    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:17.412787    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:17.423520    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:17.423538    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:17.423544    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:17.435528    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:17.435539    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:17.451356    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:17.451368    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:17.462948    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:17.462958    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:17.487104    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:17.487112    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:17.499089    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:17.499105    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:17.514014    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:17.514027    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:17.525600    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:17.525611    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:17.545293    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:17.545306    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:17.582024    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:17.582038    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:17.619090    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:17.619101    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:17.634226    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:17.634240    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:17.653858    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:17.653867    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:17.658143    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:17.658149    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:17.692245    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:17.692260    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:17.707412    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:17.707426    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:17.721407    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:17.721420    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:20.239874    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:25.242066    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:25.242384    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:25.272923    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:25.273048    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:25.294405    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:25.294493    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:25.307524    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:25.307600    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:25.319255    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:25.319329    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:25.330909    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:25.330981    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:25.342079    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:25.342147    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:25.352718    4133 logs.go:276] 0 containers: []
	W0311 04:19:25.352732    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:25.352792    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:25.363587    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:25.363604    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:25.363610    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:25.378595    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:25.378608    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:25.401977    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:25.401987    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:25.413602    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:25.413613    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:25.426931    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:25.426944    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:25.441667    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:25.441678    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:25.452509    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:25.452521    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:25.470179    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:25.470192    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:25.481384    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:25.481394    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:25.506581    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:25.506592    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:25.511205    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:25.511211    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:25.551843    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:25.551855    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:25.567652    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:25.567663    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:25.607125    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:25.607146    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:25.647247    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:25.647258    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:25.658794    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:25.658810    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:25.670332    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:25.670342    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
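The "container status" step is a fallback chain rather than a single command: it prefers crictl and only drops to docker when that fails. Unpacked:

    # `which crictl || echo crictl` yields the full path when crictl is
    # installed, otherwise the bare name; if that bare name then fails to
    # execute, the outer || runs the docker fallback.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a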
	I0311 04:19:28.183978    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:33.184706    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:33.184947    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:33.209219    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:33.209334    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:33.224956    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:33.225042    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:33.239874    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:33.239944    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:33.254204    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:33.254269    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:33.264584    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:33.264646    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:33.275381    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:33.275446    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:33.286020    4133 logs.go:276] 0 containers: []
	W0311 04:19:33.286033    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:33.286085    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:33.296608    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:33.296629    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:33.296637    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:33.331731    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:33.331744    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:33.369887    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:33.369899    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:33.381272    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:33.381285    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:33.417903    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:33.417917    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:33.447841    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:33.447852    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:33.486010    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:33.486021    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:33.511654    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:33.511665    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:33.516722    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:33.516731    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:33.531194    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:33.531203    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:33.545300    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:33.545313    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:33.557678    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:33.557688    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:33.572876    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:33.572888    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:33.587399    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:33.587412    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:33.599550    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:33.599564    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:33.612023    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:33.612040    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:33.631312    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:33.631333    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
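Taken together, this whole section is a single wait loop repeated roughly every eight seconds: probe healthz, time out after five seconds, spend two to three seconds gathering diagnostics, retry. A purely illustrative shell rendering (the real retry logic lives in minikube's Go code, in api_server.go and logs.go, not in shell):

    # Illustrative only; assumes curl and the guest IP/port from the log.
    while ! curl -ks --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "apiserver not healthy yet; collecting diagnostics"
      sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
      sleep 2.5
    done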
	I0311 04:19:36.144990    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:41.147158    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:41.147516    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:41.181595    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:41.181740    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:41.202484    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:41.202584    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:41.219365    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:41.219441    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:41.233292    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:41.233359    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:41.243814    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:41.243884    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:41.254335    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:41.254403    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:41.264102    4133 logs.go:276] 0 containers: []
	W0311 04:19:41.264112    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:41.264170    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:41.274843    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:41.274860    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:41.274865    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:41.286299    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:41.286312    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:41.320185    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:41.320198    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:41.334562    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:41.334575    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:41.349497    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:41.349507    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:41.372550    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:41.372562    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:41.387819    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:41.387835    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:41.426439    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:41.426450    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:41.463330    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:41.463346    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:41.478128    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:41.478139    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:41.489959    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:41.489970    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:41.502511    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:41.502521    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:41.526090    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:41.526101    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:41.530274    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:41.530281    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:41.543959    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:41.543973    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:41.559172    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:41.559187    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:41.570508    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:41.570520    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
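The block above is one full iteration of the recovery loop: probe the apiserver healthz endpoint with a roughly 5 s budget, and when the probe times out, enumerate every k8s_* container (running or exited) and tail its last 400 log lines. The same iteration repeats below about every eight seconds until the restart budget is exhausted (the summary line near the end of this excerpt reports 4m5s total). A minimal bash sketch of one sweep, reproduced by hand — the address, port, and the crictl-or-docker fallback are taken verbatim from this run:

    # probe the apiserver with the same 5 s budget the loop uses
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || {
      # on timeout, enumerate the control-plane containers, running or exited
      for name in kube-apiserver etcd coredns kube-scheduler \
                  kube-proxy kube-controller-manager storage-provisioner; do
        for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
          echo "=== ${name} ${id} ==="
          docker logs --tail 400 "$id"
        done
      done
      # same fallback as the "container status" step: crictl if present, else docker
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    }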
	I0311 04:19:44.087436    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:49.088510    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:49.088612    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:49.101746    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:49.101831    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:49.112539    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:49.112604    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:49.122853    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:49.122925    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:49.133241    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:49.133315    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:49.143138    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:49.143203    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:49.154082    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:49.154156    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:49.163848    4133 logs.go:276] 0 containers: []
	W0311 04:19:49.163859    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:49.163915    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:49.174281    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:49.174297    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:49.174302    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:49.188580    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:49.188594    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:49.200579    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:49.200589    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:49.218022    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:49.218031    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:49.229693    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:49.229703    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:49.267404    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:49.267418    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:49.304423    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:49.304436    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:49.319756    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:49.319767    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:49.331401    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:49.331411    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:49.343303    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:49.343314    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:49.358232    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:49.358243    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:49.376719    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:49.376731    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:49.401018    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:49.401026    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:49.405241    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:49.405249    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:49.439806    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:49.439815    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:49.457963    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:49.457977    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:49.472100    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:49.472108    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:51.988557    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:56.990325    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:56.990455    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:57.007851    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:57.007931    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:57.019308    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:57.019381    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:57.029169    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:57.029237    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:57.039702    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:57.039778    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:57.061939    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:57.062019    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:57.075417    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:57.075482    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:57.086513    4133 logs.go:276] 0 containers: []
	W0311 04:19:57.086529    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:57.086591    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:57.101577    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:57.101596    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:57.101603    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:57.120330    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:57.120344    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:57.131818    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:57.131830    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:57.169725    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:57.169738    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:57.183460    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:57.183474    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:57.197767    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:57.197778    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:57.209184    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:57.209197    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:57.220125    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:57.220137    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:57.244168    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:57.244176    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:57.256437    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:57.256449    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:57.261069    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:57.261078    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:57.275609    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:57.275621    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:57.286969    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:57.286979    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:57.305700    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:57.305710    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:57.320981    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:57.320992    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:57.334917    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:57.334931    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:57.369413    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:57.369427    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:59.909408    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:04.911583    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:04.911719    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:04.932040    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:04.932153    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:04.946581    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:04.946656    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:04.958986    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:04.959058    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:04.969742    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:04.969819    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:04.980121    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:04.980213    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:04.991073    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:04.991140    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:05.001277    4133 logs.go:276] 0 containers: []
	W0311 04:20:05.001289    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:05.001344    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:05.012176    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:05.012193    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:05.012198    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:05.049241    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:05.049254    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:05.061434    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:05.061447    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:05.073091    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:05.073103    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:05.087238    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:05.087251    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:05.101387    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:05.101399    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:05.112949    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:05.112959    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:05.127827    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:05.127838    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:05.139654    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:05.139664    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:05.177841    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:05.177849    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:05.181916    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:05.181924    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:05.218613    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:05.218624    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:05.234515    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:05.234529    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:05.245589    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:05.245599    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:05.260312    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:05.260325    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:05.271495    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:05.271507    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:05.288319    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:05.288331    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:07.813623    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:12.816301    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:12.816798    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:12.852503    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:12.852641    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:12.873227    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:12.873342    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:12.888417    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:12.888499    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:12.907595    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:12.907668    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:12.920850    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:12.920919    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:12.931550    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:12.931611    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:12.942336    4133 logs.go:276] 0 containers: []
	W0311 04:20:12.942347    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:12.942406    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:12.953024    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:12.953042    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:12.953048    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:12.968751    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:12.968767    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:12.984523    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:12.984533    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:13.019714    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:13.019729    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:13.033826    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:13.033838    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:13.046425    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:13.046435    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:13.060904    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:13.060914    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:13.097487    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:13.097498    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:13.135557    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:13.135570    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:13.148082    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:13.148095    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:13.161221    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:13.161233    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:13.172793    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:13.172826    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:13.196795    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:13.196802    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:13.208218    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:13.208227    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:13.212828    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:13.212835    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:13.226118    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:13.226130    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:13.237117    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:13.237128    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:15.757889    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:20.760454    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:20.760796    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:20.791166    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:20.791300    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:20.811183    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:20.811281    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:20.825444    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:20.825529    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:20.837529    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:20.837600    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:20.848075    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:20.848151    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:20.858836    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:20.858908    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:20.869054    4133 logs.go:276] 0 containers: []
	W0311 04:20:20.869065    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:20.869120    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:20.879618    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:20.879637    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:20.879642    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:20.891521    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:20.891532    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:20.926380    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:20.926394    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:20.940479    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:20.940492    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:20.981757    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:20.981776    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:20.996740    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:20.996753    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:21.009030    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:21.009044    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:21.024555    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:21.024567    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:21.040260    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:21.040271    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:21.081917    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:21.081928    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:21.093555    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:21.093566    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:21.116474    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:21.116485    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:21.128107    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:21.128117    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:21.140036    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:21.140048    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:21.152251    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:21.152263    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:21.156910    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:21.156917    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:21.179315    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:21.179327    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:23.695955    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:28.698269    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:28.698546    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:28.726988    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:28.727084    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:28.740259    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:28.740335    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:28.751804    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:28.751873    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:28.762701    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:28.762775    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:28.773573    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:28.773647    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:28.788421    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:28.788490    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:28.798455    4133 logs.go:276] 0 containers: []
	W0311 04:20:28.798465    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:28.798521    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:28.808610    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:28.808631    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:28.808637    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:28.846724    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:28.846736    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:28.885039    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:28.885053    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:28.899366    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:28.899378    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:28.914488    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:28.914499    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:28.926141    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:28.926152    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:28.937104    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:28.937115    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:28.960151    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:28.960160    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:28.975094    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:28.975105    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:28.987428    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:28.987438    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:29.000269    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:29.000280    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:29.004911    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:29.004920    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:29.026500    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:29.026509    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:29.038087    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:29.038099    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:29.055045    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:29.055057    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:29.066654    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:29.066664    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:29.104347    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:29.104358    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:31.624273    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:36.626688    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:36.626923    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:36.661424    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:36.661507    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:36.674784    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:36.674855    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:36.686121    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:36.686191    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:36.696495    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:36.696558    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:36.707950    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:36.708014    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:36.718639    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:36.718707    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:36.728251    4133 logs.go:276] 0 containers: []
	W0311 04:20:36.728264    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:36.728315    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:36.742037    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:36.742055    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:36.742064    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:36.778031    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:36.778044    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:36.791843    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:36.791855    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:36.806278    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:36.806295    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:36.817509    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:36.817521    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:36.832853    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:36.832866    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:36.846376    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:36.846387    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:36.858525    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:36.858536    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:36.896913    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:36.896921    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:36.911390    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:36.911401    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:36.950239    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:36.950251    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:36.967132    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:36.967145    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:36.978608    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:36.978623    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:36.983213    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:36.983220    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:36.994841    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:36.994851    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:37.006780    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:37.006791    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:37.021080    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:37.021091    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:39.545208    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:44.547343    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:44.547531    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:44.562867    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:44.562958    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:44.574963    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:44.575038    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:44.585804    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:44.585873    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:44.596493    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:44.596562    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:44.607095    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:44.607159    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:44.617848    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:44.617913    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:44.628746    4133 logs.go:276] 0 containers: []
	W0311 04:20:44.628762    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:44.628823    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:44.639051    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:44.639070    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:44.639076    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:44.644122    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:44.644129    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:44.657780    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:44.657790    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:44.672388    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:44.672402    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:44.683493    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:44.683505    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:44.699280    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:44.699294    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:44.711236    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:44.711247    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:44.749875    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:44.749887    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:44.787458    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:44.787472    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:44.801795    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:44.801805    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:44.816724    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:44.816736    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:44.839545    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:44.839555    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:44.881004    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:44.881018    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:44.898642    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:44.898653    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:44.910076    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:44.910089    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:44.921462    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:44.921471    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:44.939125    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:44.939136    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:47.452800    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:52.453646    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:52.453704    4133 kubeadm.go:591] duration metric: took 4m5.126602625s to restartPrimaryControlPlane
	W0311 04:20:52.453751    4133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 04:20:52.453773    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0311 04:20:53.453167    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 04:20:53.458060    4133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:20:53.460824    4133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:20:53.463611    4133 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 04:20:53.463618    4133 kubeadm.go:156] found existing configuration files:
	
	I0311 04:20:53.463641    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf
	I0311 04:20:53.466697    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 04:20:53.466724    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:20:53.469471    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf
	I0311 04:20:53.472089    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 04:20:53.472114    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:20:53.475131    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf
	I0311 04:20:53.478147    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 04:20:53.478177    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:20:53.480782    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf
	I0311 04:20:53.483699    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 04:20:53.483723    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
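The four grep-and-remove pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so kubeadm init can regenerate it. Condensed, the logic is equivalent to this loop (endpoint and port are the ones from this run):

    endpoint="https://control-plane.minikube.internal:50305"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep fails both when the endpoint is absent and when the file is
      # missing, as it is here, so the rm runs in either case
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done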
	I0311 04:20:53.486804    4133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 04:20:53.506232    4133 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0311 04:20:53.506258    4133 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 04:20:53.554244    4133 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 04:20:53.554325    4133 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 04:20:53.554374    4133 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 04:20:53.605530    4133 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 04:20:53.613734    4133 out.go:204]   - Generating certificates and keys ...
	I0311 04:20:53.613766    4133 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 04:20:53.613796    4133 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 04:20:53.613850    4133 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 04:20:53.613891    4133 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 04:20:53.613934    4133 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 04:20:53.613969    4133 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 04:20:53.614002    4133 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 04:20:53.614034    4133 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 04:20:53.614071    4133 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 04:20:53.614120    4133 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 04:20:53.614144    4133 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 04:20:53.614171    4133 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 04:20:53.741838    4133 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 04:20:54.090972    4133 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 04:20:54.166405    4133 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 04:20:54.208171    4133 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 04:20:54.240510    4133 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 04:20:54.240849    4133 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 04:20:54.240879    4133 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 04:20:54.321338    4133 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 04:20:54.324642    4133 out.go:204]   - Booting up control plane ...
	I0311 04:20:54.324689    4133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 04:20:54.324735    4133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 04:20:54.324768    4133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 04:20:54.324819    4133 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 04:20:54.324900    4133 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 04:20:58.825781    4133 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502272 seconds
	I0311 04:20:58.825847    4133 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 04:20:58.829933    4133 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 04:20:59.338979    4133 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 04:20:59.339077    4133 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-745000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 04:20:59.842840    4133 kubeadm.go:309] [bootstrap-token] Using token: kg6c8y.a4p5bpadbpysmcdj
	I0311 04:20:59.848681    4133 out.go:204]   - Configuring RBAC rules ...
	I0311 04:20:59.848737    4133 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 04:20:59.848791    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 04:20:59.855517    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0311 04:20:59.856440    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0311 04:20:59.857263    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 04:20:59.858127    4133 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 04:20:59.861471    4133 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 04:21:00.018162    4133 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 04:21:00.246782    4133 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 04:21:00.247441    4133 kubeadm.go:309] 
	I0311 04:21:00.247470    4133 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 04:21:00.247473    4133 kubeadm.go:309] 
	I0311 04:21:00.247511    4133 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 04:21:00.247517    4133 kubeadm.go:309] 
	I0311 04:21:00.247554    4133 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 04:21:00.247593    4133 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 04:21:00.247667    4133 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 04:21:00.247671    4133 kubeadm.go:309] 
	I0311 04:21:00.247694    4133 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 04:21:00.247696    4133 kubeadm.go:309] 
	I0311 04:21:00.247719    4133 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 04:21:00.247722    4133 kubeadm.go:309] 
	I0311 04:21:00.247752    4133 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 04:21:00.247821    4133 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 04:21:00.247921    4133 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 04:21:00.247925    4133 kubeadm.go:309] 
	I0311 04:21:00.247966    4133 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 04:21:00.248009    4133 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 04:21:00.248013    4133 kubeadm.go:309] 
	I0311 04:21:00.248093    4133 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kg6c8y.a4p5bpadbpysmcdj \
	I0311 04:21:00.248159    4133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e \
	I0311 04:21:00.248173    4133 kubeadm.go:309] 	--control-plane 
	I0311 04:21:00.248175    4133 kubeadm.go:309] 
	I0311 04:21:00.248218    4133 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 04:21:00.248223    4133 kubeadm.go:309] 
	I0311 04:21:00.248264    4133 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kg6c8y.a4p5bpadbpysmcdj \
	I0311 04:21:00.248314    4133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e 
	I0311 04:21:00.248373    4133 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
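
The init transcript above is kubeadm's own output, relayed line by line through minikube's ssh_runner (the `Start:` entry at the head of this run). A minimal sketch of that relay pattern, using plain os/exec instead of minikube's actual SSH transport:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    )

    // runStreaming mimics the ssh_runner "Start" pattern seen above: run a
    // command through bash -c and surface each output line as it arrives.
    func runStreaming(command string) error {
    	cmd := exec.Command("/bin/bash", "-c", command)
    	stdout, err := cmd.StdoutPipe()
    	if err != nil {
    		return err
    	}
    	if err := cmd.Start(); err != nil {
    		return err
    	}
    	scanner := bufio.NewScanner(stdout)
    	for scanner.Scan() {
    		// minikube forwards lines like these as the kubeadm.go:309 entries.
    		fmt.Println(scanner.Text())
    	}
    	return cmd.Wait()
    }

    func main() {
    	// Hypothetical invocation; the real command runs inside the guest VM.
    	_ = runStreaming(`echo "[preflight] Running pre-flight checks"`)
    }

The real runner streams over an SSH session into the guest, but the shape — start, scan, wait — is the same.
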
	I0311 04:21:00.248380    4133 cni.go:84] Creating CNI manager for ""
	I0311 04:21:00.248387    4133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:21:00.252402    4133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 04:21:00.259336    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 04:21:00.263018    4133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
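
The log records only the size of the conflist copied into /etc/cni/net.d (457 bytes), not its contents. A sketch of writing a plausible bridge conflist follows; the JSON body here is an assumption for illustration, not the file minikube actually generates:

    package main

    import "os"

    // A guessed bridge CNI config; the real 1-k8s.conflist contents are
    // not shown in the log above.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	// Path taken from the log; 0644 is a typical conflist mode.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
    		[]byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }
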
	I0311 04:21:00.267820    4133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 04:21:00.267866    4133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-745000 minikube.k8s.io/updated_at=2024_03_11T04_21_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=running-upgrade-745000 minikube.k8s.io/primary=true
	I0311 04:21:00.267866    4133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 04:21:00.310468    4133 ops.go:34] apiserver oom_adj: -16
	I0311 04:21:00.310496    4133 kubeadm.go:1106] duration metric: took 42.670792ms to wait for elevateKubeSystemPrivileges
	W0311 04:21:00.310572    4133 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 04:21:00.310578    4133 kubeadm.go:393] duration metric: took 4m12.997984292s to StartCluster
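
The elevateKubeSystemPrivileges step shells out to kubectl (logged above) to grant cluster-admin to kube-system:default. The same binding expressed through client-go, as a sketch rather than minikube's actual code path:

    package main

    import (
    	"context"

    	rbacv1 "k8s.io/api/rbac/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the same kubeconfig the logged kubectl command uses.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	crb := &rbacv1.ClusterRoleBinding{
    		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
    		RoleRef: rbacv1.RoleRef{
    			APIGroup: "rbac.authorization.k8s.io",
    			Kind:     "ClusterRole",
    			Name:     "cluster-admin",
    		},
    		Subjects: []rbacv1.Subject{
    			{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"},
    		},
    	}
    	// Equivalent of: kubectl create clusterrolebinding minikube-rbac ...
    	if _, err := client.RbacV1().ClusterRoleBindings().Create(
    		context.TODO(), crb, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
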
	I0311 04:21:00.310587    4133 settings.go:142] acquiring lock: {Name:mk914df43a11d01b4609d1cefd86c6d6814b7b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:00.310663    4133 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:21:00.311049    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:00.311264    4133 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:21:00.315293    4133 out.go:177] * Verifying Kubernetes components...
	I0311 04:21:00.311281    4133 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 04:21:00.311341    4133 config.go:182] Loaded profile config "running-upgrade-745000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:21:00.322292    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:21:00.322296    4133 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-745000"
	I0311 04:21:00.322308    4133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-745000"
	I0311 04:21:00.322293    4133 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-745000"
	I0311 04:21:00.322329    4133 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-745000"
	W0311 04:21:00.322339    4133 addons.go:243] addon storage-provisioner should already be in state true
	I0311 04:21:00.322349    4133 host.go:66] Checking if "running-upgrade-745000" exists ...
	I0311 04:21:00.323299    4133 kapi.go:59] client config for running-upgrade-745000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10604ffd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
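
The dump above is minikube's sanitized view of the client-go rest.Config it builds for the profile. A minimal reconstruction using the public rest.TLSClientConfig type (paths copied from the log):

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	profile := "/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000"
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: profile + "/client.crt",
    			KeyFile:  profile + "/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt",
    		},
    	}
    	// kapi.go builds a clientset from this config to inspect addon state.
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }
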
	I0311 04:21:00.323411    4133 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-745000"
	W0311 04:21:00.323416    4133 addons.go:243] addon default-storageclass should already be in state true
	I0311 04:21:00.323424    4133 host.go:66] Checking if "running-upgrade-745000" exists ...
	I0311 04:21:00.328276    4133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:21:00.332334    4133 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 04:21:00.332340    4133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 04:21:00.332346    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
	I0311 04:21:00.332921    4133 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:00.332926    4133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 04:21:00.332930    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
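
Both addon installs reuse an SSH client to the guest at localhost:50273, authenticated with the machine's id_rsa. A bare-bones version of that client setup with golang.org/x/crypto/ssh, assuming key-only auth as the log suggests:

    package main

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
    	}
    	client, err := ssh.Dial("tcp", "localhost:50273", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// The scp-style "memory --> file" copies are layered on sessions like this.
    	_ = sess.Run("sudo mkdir -p /etc/kubernetes/addons")
    }
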
	I0311 04:21:00.402392    4133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:21:00.407144    4133 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:21:00.407187    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:21:00.411090    4133 api_server.go:72] duration metric: took 99.818542ms to wait for apiserver process to appear ...
	I0311 04:21:00.411097    4133 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:21:00.411104    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:00.441766    4133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:00.442034    4133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 04:21:05.412286    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:05.412333    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:10.412884    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:10.412920    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:15.413006    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:15.413033    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:20.413374    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:20.413400    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:25.413658    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:25.413706    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:30.414102    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:30.414129    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0311 04:21:30.768277    4133 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0311 04:21:30.772628    4133 out.go:177] * Enabled addons: storage-provisioner
	I0311 04:21:30.782587    4133 addons.go:505] duration metric: took 30.472184666s for enable addons: enabled=[storage-provisioner]
	I0311 04:21:35.414700    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:35.414796    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:40.415923    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:40.415944    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:45.417001    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:45.417024    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:50.417359    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:50.417392    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:55.419022    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:55.419043    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:00.420975    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
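
Each "Checking apiserver healthz" / "stopped" pair above is a single probe that times out after roughly five seconds; the loop keeps retrying until the 6m0s node wait expires. A reduced sketch of that wait loop (probe timeout inferred from the log timestamps, not read from minikube's source):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// ~5s per probe matches the gaps between the log lines above.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The real check trusts the cluster CA; skipping verification
    			// here just keeps the sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(6 * time.Minute) // start.go: "Will wait 6m0s for node"
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // mirrors the api_server.go:269 lines
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("timed out waiting for healthz")
    }
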
	I0311 04:22:00.421074    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:00.432946    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:00.433018    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:00.459522    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:00.459594    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:00.470583    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:00.470659    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:00.480816    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:00.480879    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:00.491546    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:00.491618    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:00.502005    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:00.502073    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:00.511769    4133 logs.go:276] 0 containers: []
	W0311 04:22:00.511778    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:00.511832    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:00.521936    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:00.521951    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:00.521957    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:00.546486    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:00.546495    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:00.582279    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:00.582288    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:00.587045    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:00.587054    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:00.623322    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:00.623334    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:00.635203    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:00.635218    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:00.650152    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:00.650164    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:00.665716    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:00.665728    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:00.676747    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:00.676758    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:00.687934    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:00.687944    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:00.702395    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:00.702408    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:00.718357    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:00.718368    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:00.730250    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:00.730262    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
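
When the healthz wait keeps failing, minikube switches to diagnostics: one `docker ps -a --filter name=...` per component to collect container IDs, then `docker logs --tail 400` on each hit, plus journalctl and dmesg. A compressed sketch of the per-container part of that cycle, which repeats verbatim below on each retry:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of containers whose name matches the filter,
    // equivalent to: docker ps -a --filter=name=<name> --format={{.ID}}
    func containerIDs(name string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name="+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	// Same component list the log cycles through.
    	for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
    		"k8s_kube-scheduler", "k8s_kube-proxy", "k8s_kube-controller-manager",
    		"k8s_kindnet", "k8s_storage-provisioner"} {
    		ids := containerIDs(c)
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", strings.TrimPrefix(c, "k8s_"))
    			continue
    		}
    		for _, id := range ids {
    			// Equivalent of: /bin/bash -c "docker logs --tail 400 <id>"
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
    		}
    	}
    }
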
	I0311 04:22:03.251441    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:08.253813    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:08.254053    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:08.270233    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:08.270330    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:08.283822    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:08.283897    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:08.294941    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:08.295008    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:08.305519    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:08.305581    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:08.315721    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:08.315796    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:08.326257    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:08.326320    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:08.336471    4133 logs.go:276] 0 containers: []
	W0311 04:22:08.336480    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:08.336544    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:08.347744    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:08.347784    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:08.347794    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:08.383139    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:08.383154    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:08.397984    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:08.397997    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:08.409294    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:08.409308    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:08.424573    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:08.424587    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:08.436729    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:08.436741    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:08.462084    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:08.462096    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:08.479152    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:08.479164    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:08.515437    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:08.515446    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:08.519676    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:08.519685    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:08.533640    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:08.533650    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:08.544736    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:08.544746    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:08.558538    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:08.558547    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:11.078869    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:16.081102    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:16.081323    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:16.100784    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:16.100858    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:16.112026    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:16.112089    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:16.123111    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:16.123181    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:16.134716    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:16.134784    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:16.145363    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:16.145443    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:16.157977    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:16.158040    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:16.168194    4133 logs.go:276] 0 containers: []
	W0311 04:22:16.168207    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:16.168267    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:16.178276    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:16.178290    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:16.178296    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:16.212946    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:16.212957    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:16.248189    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:16.248201    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:16.267036    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:16.267048    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:16.281817    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:16.281829    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:16.299054    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:16.299064    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:16.318135    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:16.318146    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:16.329586    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:16.329598    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:16.334630    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:16.334637    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:16.349495    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:16.349505    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:16.360749    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:16.360766    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:16.371526    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:16.371536    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:16.383108    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:16.383118    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:18.908778    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:23.913889    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:23.914129    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:23.941039    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:23.941169    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:23.959277    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:23.959363    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:23.972711    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:23.972779    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:23.988227    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:23.988298    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:24.002266    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:24.002336    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:24.012791    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:24.012862    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:24.022581    4133 logs.go:276] 0 containers: []
	W0311 04:22:24.022596    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:24.022651    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:24.032756    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:24.032774    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:24.032779    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:24.046755    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:24.046768    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:24.060621    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:24.060633    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:24.072435    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:24.072446    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:24.083664    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:24.083673    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:24.097913    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:24.097927    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:24.112502    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:24.112510    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:24.129757    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:24.129770    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:24.142600    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:24.142614    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:24.178817    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:24.178827    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:24.183282    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:24.183292    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:24.218628    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:24.218640    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:24.243427    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:24.243435    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:26.761676    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:31.769155    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:31.769317    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:31.781382    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:31.781456    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:31.799926    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:31.800017    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:31.810319    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:31.810384    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:31.821564    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:31.821633    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:31.832387    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:31.832462    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:31.843517    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:31.843581    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:31.852932    4133 logs.go:276] 0 containers: []
	W0311 04:22:31.852943    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:31.852998    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:31.864150    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:31.864163    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:31.864168    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:31.868493    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:31.868498    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:31.908267    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:31.908280    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:31.922182    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:31.922195    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:31.933868    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:31.933884    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:31.951858    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:31.951873    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:31.985876    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:31.985885    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:31.997511    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:31.997525    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:32.012709    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:32.012722    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:32.027870    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:32.027881    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:32.041376    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:32.041386    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:32.063168    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:32.063180    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:32.086737    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:32.086746    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:34.605765    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:39.612133    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:39.612246    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:39.625267    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:39.625365    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:39.636053    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:39.636127    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:39.646619    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:39.646693    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:39.659272    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:39.659345    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:39.669322    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:39.669393    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:39.680474    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:39.680545    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:39.691039    4133 logs.go:276] 0 containers: []
	W0311 04:22:39.691049    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:39.691103    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:39.701206    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:39.701220    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:39.701226    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:39.736524    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:39.736539    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:39.751373    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:39.751387    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:39.764679    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:39.764690    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:39.783795    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:39.783809    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:39.806599    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:39.806608    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:39.817573    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:39.817585    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:39.829801    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:39.829812    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:39.865605    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:39.865616    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:39.870558    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:39.870565    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:39.882294    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:39.882307    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:39.893055    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:39.893065    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:39.904517    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:39.904527    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:42.425661    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:47.430386    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:47.430534    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:47.441662    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:47.441739    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:47.452283    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:47.452356    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:47.463261    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:47.463326    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:47.473532    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:47.473606    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:47.483766    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:47.483827    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:47.494161    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:47.494231    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:47.504824    4133 logs.go:276] 0 containers: []
	W0311 04:22:47.504835    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:47.504893    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:47.519153    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:47.519168    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:47.519173    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:47.533204    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:47.533214    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:47.546952    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:47.546962    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:47.565381    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:47.565393    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:47.581181    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:47.581193    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:47.593793    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:47.593804    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:47.611235    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:47.611245    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:47.627933    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:47.627944    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:47.661466    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:47.661478    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:47.673317    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:47.673326    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:47.698900    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:47.698913    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:47.703603    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:47.703612    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:47.715067    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:47.715077    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:50.252928    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:55.256664    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:55.256785    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:55.267449    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:55.267539    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:55.278549    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:55.278612    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:55.289322    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:55.289392    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:55.306706    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:55.306775    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:55.317146    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:55.317214    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:55.330569    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:55.330639    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:55.340923    4133 logs.go:276] 0 containers: []
	W0311 04:22:55.340931    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:55.340984    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:55.352163    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:55.352176    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:55.352183    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:55.388654    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:55.388665    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:55.424100    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:55.424111    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:55.436444    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:55.436454    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:55.451544    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:55.451554    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:55.463119    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:55.463128    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:55.475607    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:55.475618    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:55.480511    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:55.480519    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:55.495362    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:55.495372    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:55.508966    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:55.508976    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:55.520888    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:55.520897    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:55.538506    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:55.538516    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:55.550373    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:55.550382    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:58.078205    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:03.081343    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:03.081432    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:03.092862    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:03.092934    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:03.103746    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:03.103820    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:03.114534    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:23:03.114602    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:03.124841    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:03.124914    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:03.136206    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:03.136275    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:03.146979    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:03.147055    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:03.157638    4133 logs.go:276] 0 containers: []
	W0311 04:23:03.157650    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:03.157709    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:03.168292    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:03.168309    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:03.168314    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:03.172992    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:03.172999    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:03.208621    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:03.208632    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:03.222933    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:03.222947    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:03.236682    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:03.236691    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:03.256016    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:03.256031    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:03.267394    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:03.267408    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:03.282280    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:03.282290    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:03.293831    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:03.293841    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:03.315673    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:03.315685    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:03.326879    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:03.326888    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:03.351891    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:03.351907    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:03.386710    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:03.386719    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
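
The block above is one full iteration of the pattern this log repeats: a /healthz probe that gives up after the five-second client timeout, then a fresh round of container discovery and log gathering. A minimal Go sketch of that polling loop, assuming illustrative names (pollHealthz is not minikube's actual identifier) and inferring the retry cadence and per-request timeout from the timestamps above:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint until it answers or the
// overall deadline expires. The 5s client timeout matches the
// "Client.Timeout exceeded" errors in the log; the sleep approximates the
// gap between one failed probe and the next.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The cluster serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered; cluster is healthy
			}
		}
		time.Sleep(2500 * time.Millisecond) // back off before the next probe
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, deadline)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
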
	I0311 04:23:05.900720    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:10.903531    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:10.903640    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:10.914896    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:10.914979    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:10.927879    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:10.927952    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:10.938985    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:23:10.939058    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:10.949783    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:10.949849    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:10.960030    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:10.960104    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:10.970263    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:10.970325    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:10.980386    4133 logs.go:276] 0 containers: []
	W0311 04:23:10.980401    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:10.980462    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:10.990408    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:10.990421    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:10.990427    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:11.002145    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:11.002157    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:11.017384    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:11.017399    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:11.034632    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:11.034645    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:11.070050    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:11.070062    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:11.075119    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:11.075128    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:11.109187    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:11.109197    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:11.123635    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:11.123645    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:11.137585    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:11.137596    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:11.149019    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:11.149029    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:11.172028    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:11.172035    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:11.183053    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:11.183064    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:11.194448    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:11.194457    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:13.708232    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:18.709887    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:18.709994    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:18.721458    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:18.721531    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:18.732777    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:18.732848    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:18.745419    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:18.745493    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:18.757034    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:18.757109    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:18.768211    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:18.768280    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:18.779410    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:18.779476    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:18.789760    4133 logs.go:276] 0 containers: []
	W0311 04:23:18.789770    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:18.789828    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:18.800751    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:18.800769    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:18.800775    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:18.819489    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:18.819500    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:18.832493    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:18.832505    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:18.867263    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:18.867275    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:18.882838    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:18.882851    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:18.900871    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:18.900883    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:18.925409    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:18.925420    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:18.929680    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:18.929690    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:18.943190    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:18.943201    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:18.954785    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:18.954796    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:18.966333    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:18.966344    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:18.978155    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:18.978166    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:19.013517    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:19.013526    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:19.027850    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:19.027860    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:19.038972    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:19.038984    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:21.552004    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:26.554365    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:26.554480    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:26.573329    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:26.573483    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:26.586746    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:26.586823    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:26.598342    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:26.598418    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:26.610997    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:26.611070    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:26.622733    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:26.622810    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:26.635018    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:26.635093    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:26.646334    4133 logs.go:276] 0 containers: []
	W0311 04:23:26.646348    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:26.646405    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:26.657621    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:26.657643    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:26.657648    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:26.671587    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:26.671598    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:26.683049    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:26.683063    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:26.695581    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:26.695594    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:26.711174    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:26.711184    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:26.723040    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:26.723049    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:26.741452    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:26.741463    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:26.765622    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:26.765633    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:26.770138    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:26.770145    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:26.785359    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:26.785369    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:26.820841    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:26.820851    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:26.856114    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:26.856125    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:26.868092    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:26.868103    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:26.879609    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:26.879623    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:26.904080    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:26.904095    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:29.417631    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:34.419441    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:34.419601    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:34.434707    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:34.434781    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:34.445346    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:34.445416    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:34.456246    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:34.456324    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:34.467246    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:34.467314    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:34.479582    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:34.479642    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:34.494163    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:34.494226    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:34.504456    4133 logs.go:276] 0 containers: []
	W0311 04:23:34.504468    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:34.504517    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:34.514652    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:34.514673    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:34.514678    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:34.551278    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:34.551287    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:34.564655    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:34.564665    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:34.576230    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:34.576241    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:34.580894    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:34.580904    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:34.595732    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:34.595748    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:34.611686    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:34.611697    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:34.629238    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:34.629247    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:34.641632    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:34.641649    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:34.653493    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:34.653511    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:34.678051    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:34.678059    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:34.713320    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:34.713334    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:34.724450    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:34.724459    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:34.736520    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:34.736533    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:34.748108    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:34.748121    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:37.261836    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:42.264098    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:42.264213    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:42.274844    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:42.274917    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:42.285650    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:42.285715    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:42.296163    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:42.296237    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:42.307890    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:42.307957    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:42.318556    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:42.318629    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:42.329094    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:42.329165    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:42.339743    4133 logs.go:276] 0 containers: []
	W0311 04:23:42.339756    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:42.339825    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:42.350208    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:42.350225    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:42.350231    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:42.385381    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:42.385393    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:42.389496    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:42.389505    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:42.423973    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:42.423987    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:42.440617    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:42.440627    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:42.452340    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:42.452350    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:42.469110    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:42.469123    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:42.483575    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:42.483586    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:42.494986    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:42.494997    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:42.506481    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:42.506491    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:42.524510    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:42.524521    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:42.536165    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:42.536175    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:42.560941    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:42.560950    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:42.577946    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:42.577956    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:42.589418    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:42.589430    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
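
Each gather cycle begins by enumerating containers per control-plane component with the exact docker ps filter shown in the Run: lines above, which is why the log reports counts like "1 containers: [8f9c34300be9]". A self-contained sketch of that step (containerIDs is a hypothetical helper; minikube executes the command inside the guest over SSH via ssh_runner rather than locally):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches the kubeadm
// naming convention k8s_<component>, mirroring the filter in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
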
	I0311 04:23:45.103800    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:50.104629    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:50.104731    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:50.115153    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:50.115225    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:50.125259    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:50.125327    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:50.139675    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:50.139757    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:50.151582    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:50.151652    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:50.163912    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:50.163987    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:50.174842    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:50.174909    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:50.185087    4133 logs.go:276] 0 containers: []
	W0311 04:23:50.185097    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:50.185156    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:50.195722    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:50.195740    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:50.195745    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:50.200538    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:50.200543    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:50.211973    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:50.211984    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:50.223695    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:50.223704    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:50.258148    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:50.258155    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:50.292176    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:50.292187    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:50.307039    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:50.307049    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:50.318529    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:50.318541    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:50.333155    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:50.333167    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:50.348017    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:50.348028    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:50.361034    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:50.361048    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:50.372275    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:50.372286    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:50.388028    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:50.388039    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:50.406893    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:50.406903    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:50.424079    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:50.424093    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:52.949671    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:57.951692    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:57.951770    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:57.962392    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:57.962451    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:57.972503    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:57.972572    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:57.983192    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:57.983254    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:57.994084    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:57.994144    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:58.004910    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:58.004976    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:58.016186    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:58.016277    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:58.026500    4133 logs.go:276] 0 containers: []
	W0311 04:23:58.026511    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:58.026569    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:58.036802    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:58.036820    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:58.036826    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:58.048876    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:58.048890    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:58.060178    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:58.060186    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:58.096302    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:58.096313    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:58.111570    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:58.111583    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:58.122885    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:58.122893    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:58.134968    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:58.134977    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:58.150275    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:58.150285    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:58.177167    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:58.177177    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:58.181723    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:58.181732    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:58.215604    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:58.215614    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:58.230825    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:58.230836    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:58.242441    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:58.242450    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:58.254586    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:58.254596    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:58.279081    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:58.279090    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:00.792694    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:05.794842    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:05.794946    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:05.808060    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:05.808135    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:05.821182    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:05.821271    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:05.833216    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:05.833283    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:05.845047    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:05.845117    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:05.856642    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:05.856709    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:05.867982    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:05.868045    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:05.880626    4133 logs.go:276] 0 containers: []
	W0311 04:24:05.880639    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:05.880696    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:05.892070    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:05.892088    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:05.892093    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:05.904431    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:05.904445    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:05.916753    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:05.916762    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:05.934695    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:05.934706    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:05.958487    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:05.958498    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:05.971933    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:05.971943    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:05.983684    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:05.983694    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:05.995754    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:05.995765    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:06.007150    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:06.007162    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:06.026146    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:06.026158    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:06.038026    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:06.038036    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:06.073518    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:06.073530    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:06.078083    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:06.078092    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:06.114200    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:06.114211    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:06.137259    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:06.137270    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:08.650965    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:13.653200    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:13.653287    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:13.664924    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:13.664998    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:13.677886    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:13.677961    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:13.688708    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:13.688785    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:13.699700    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:13.699771    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:13.710509    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:13.710583    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:13.721248    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:13.721314    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:13.731672    4133 logs.go:276] 0 containers: []
	W0311 04:24:13.731685    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:13.731743    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:13.743197    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:13.743214    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:13.743219    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:13.748250    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:13.748257    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:13.763012    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:13.763025    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:13.775104    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:13.775114    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:13.790103    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:13.790114    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:13.801137    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:13.801148    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:13.818007    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:13.818019    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:13.829768    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:13.829778    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:13.863649    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:13.863661    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:13.878752    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:13.878763    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:13.890950    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:13.890960    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:13.902037    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:13.902047    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:13.935621    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:13.935629    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:13.946929    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:13.946940    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:13.958301    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:13.958312    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:16.483931    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:21.486177    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:21.486314    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:21.498582    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:21.498657    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:21.510299    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:21.510372    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:21.521744    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:21.521820    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:21.532344    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:21.532414    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:21.543293    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:21.543362    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:21.553671    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:21.553731    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:21.564354    4133 logs.go:276] 0 containers: []
	W0311 04:24:21.564368    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:21.564429    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:21.575421    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:21.575437    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:21.575442    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:21.593266    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:21.593277    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:21.611593    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:21.611602    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:21.622935    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:21.622948    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:21.639681    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:21.639699    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:21.651686    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:21.651700    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:21.667375    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:21.667386    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:21.680996    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:21.681007    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:21.716467    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:21.716477    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:21.721067    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:21.721078    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:21.734579    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:21.734591    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:21.746571    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:21.746582    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:21.780358    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:21.780368    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:21.792517    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:21.792528    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:21.810406    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:21.810419    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:24.335854    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:29.338079    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:29.338164    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:29.351163    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:29.351237    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:29.362455    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:29.362530    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:29.373816    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:29.373894    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:29.385131    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:29.385198    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:29.396349    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:29.396419    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:29.408658    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:29.408725    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:29.419197    4133 logs.go:276] 0 containers: []
	W0311 04:24:29.419207    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:29.419262    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:29.429407    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:29.429423    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:29.429428    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:29.441355    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:29.441373    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:29.466094    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:29.466112    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:29.478382    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:29.478393    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:29.490013    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:29.490028    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:29.503843    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:29.503857    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:29.508752    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:29.508760    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:29.543933    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:29.543945    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:29.557941    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:29.557952    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:29.569537    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:29.569546    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:29.581131    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:29.581141    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:29.616896    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:29.616904    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:29.631120    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:29.631131    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:29.643794    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:29.643804    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:29.658686    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:29.658696    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
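
Once the containers are known, each cycle tails the last 400 lines per container and pulls host-level services from the journal, matching the docker logs and journalctl invocations recorded above. A sketch under the same caveat that gatherLogs is illustrative and the real commands run inside the guest over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs tails the last 400 lines of each discovered container, the same
// `docker logs --tail 400 <id>` invocation the entries above record, then
// reads the container runtime's own logs from the journal.
func gatherLogs(ids map[string]string) map[string]string {
	logs := make(map[string]string)
	for name, id := range ids {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			logs[name] = "error: " + err.Error()
			continue
		}
		logs[name] = string(out)
	}
	// Host-level services are read from the journal, as in the log above.
	out, _ := exec.Command("journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400").CombinedOutput()
	logs["Docker"] = string(out)
	return logs
}

func main() {
	logs := gatherLogs(map[string]string{"kube-apiserver": "8f9c34300be9"})
	fmt.Println(len(logs), "log streams gathered")
}
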
	I0311 04:24:32.179636    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:37.181835    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:37.181987    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:37.193469    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:37.193539    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:37.205899    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:37.205968    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:37.217505    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:37.217577    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:37.228835    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:37.228947    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:37.240744    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:37.240811    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:37.253773    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:37.253848    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:37.265203    4133 logs.go:276] 0 containers: []
	W0311 04:24:37.265213    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:37.265276    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:37.276551    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:37.276570    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:37.276575    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:37.301323    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:37.301333    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:37.312803    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:37.312813    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:37.327318    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:37.327329    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:37.340897    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:37.340908    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:37.352906    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:37.352919    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:37.365247    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:37.365262    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:37.380938    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:37.380951    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:37.411611    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:37.411629    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:37.420226    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:37.420247    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:37.440319    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:37.440333    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:37.462260    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:37.462272    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:37.477321    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:37.477333    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:37.512781    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:37.512789    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:37.548272    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:37.548283    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:40.061811    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:45.064009    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:45.064087    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:45.075118    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:45.075187    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:45.086915    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:45.086989    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:45.098500    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:45.098573    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:45.114532    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:45.114605    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:45.129312    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:45.129389    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:45.140642    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:45.140715    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:45.152345    4133 logs.go:276] 0 containers: []
	W0311 04:24:45.152358    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:45.152420    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:45.164187    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:45.164205    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:45.164210    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:45.187675    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:45.187685    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:45.204352    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:45.204362    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:45.215902    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:45.215911    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:45.228915    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:45.228925    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:45.244328    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:45.244339    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:45.256459    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:45.256469    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:45.291966    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:45.291975    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:45.328112    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:45.328126    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:45.344756    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:45.344767    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:45.356584    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:45.356599    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:45.375713    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:45.375732    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:45.380967    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:45.380978    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:45.397286    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:45.397301    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:45.411623    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:45.411637    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:47.925683    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:52.928056    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:52.928196    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:52.947353    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:52.947425    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:52.958283    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:52.958346    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:52.969042    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:52.969115    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:52.980194    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:52.980274    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:52.991286    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:52.991357    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:53.003301    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:53.003367    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:53.015932    4133 logs.go:276] 0 containers: []
	W0311 04:24:53.015945    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:53.015997    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:53.027482    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:53.027498    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:53.027503    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:53.040463    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:53.040474    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:53.056066    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:53.056077    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:53.067901    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:53.067910    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:53.079799    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:53.079811    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:53.114599    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:53.114609    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:53.118759    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:53.118768    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:53.130012    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:53.130023    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:53.153859    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:53.153867    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:53.171953    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:53.171970    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:53.183441    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:53.183458    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:53.198619    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:53.198629    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:53.214338    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:53.214351    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:53.251674    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:53.251688    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:53.269695    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:53.269705    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:55.786625    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:00.786769    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:00.791010    4133 out.go:177] 
	W0311 04:25:00.795121    4133 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0311 04:25:00.795126    4133 out.go:239] * 
	W0311 04:25:00.795589    4133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:25:00.811051    4133 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-745000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-11 04:25:00.895333 -0700 PDT m=+3038.411087417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-745000 -n running-upgrade-745000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-745000 -n running-upgrade-745000: exit status 2 (15.644657709s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-745000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo cat                            | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo cat                            | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo cat                            | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo cat                            | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo                                | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo find                           | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-896000 sudo crio                           | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-896000                                     | cilium-896000             | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT | 11 Mar 24 04:14 PDT |
	| start   | -p kubernetes-upgrade-368000                         | kubernetes-upgrade-368000 | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-255000                             | offline-docker-255000     | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT | 11 Mar 24 04:14 PDT |
	| stop    | -p kubernetes-upgrade-368000                         | kubernetes-upgrade-368000 | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT | 11 Mar 24 04:14 PDT |
	| start   | -p kubernetes-upgrade-368000                         | kubernetes-upgrade-368000 | jenkins | v1.32.0 | 11 Mar 24 04:14 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-629000                            | minikube                  | jenkins | v1.26.0 | 11 Mar 24 04:14 PDT | 11 Mar 24 04:16 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-368000                         | kubernetes-upgrade-368000 | jenkins | v1.32.0 | 11 Mar 24 04:15 PDT | 11 Mar 24 04:15 PDT |
	| start   | -p running-upgrade-745000                            | minikube                  | jenkins | v1.26.0 | 11 Mar 24 04:15 PDT | 11 Mar 24 04:16 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p running-upgrade-745000                            | running-upgrade-745000    | jenkins | v1.32.0 | 11 Mar 24 04:16 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-629000 stop                          | minikube                  | jenkins | v1.26.0 | 11 Mar 24 04:16 PDT | 11 Mar 24 04:16 PDT |
	| start   | -p stopped-upgrade-629000                            | stopped-upgrade-629000    | jenkins | v1.32.0 | 11 Mar 24 04:16 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 04:16:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 04:16:49.073217    4187 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:16:49.073362    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:16:49.073366    4187 out.go:304] Setting ErrFile to fd 2...
	I0311 04:16:49.073368    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:16:49.073527    4187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:16:49.074988    4187 out.go:298] Setting JSON to false
	I0311 04:16:49.094309    4187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2781,"bootTime":1710153028,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:16:49.094387    4187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:16:49.099084    4187 out.go:177] * [stopped-upgrade-629000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:16:49.110042    4187 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:16:49.106131    4187 notify.go:220] Checking for updates...
	I0311 04:16:49.117882    4187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:16:49.122057    4187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:16:49.125102    4187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:16:49.126270    4187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:16:49.129008    4187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:16:49.132347    4187 config.go:182] Loaded profile config "stopped-upgrade-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:16:49.135044    4187 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 04:16:49.138039    4187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:16:49.142062    4187 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:16:49.149017    4187 start.go:297] selected driver: qemu2
	I0311 04:16:49.149036    4187 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:49.149115    4187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:16:49.152203    4187 cni.go:84] Creating CNI manager for ""
	I0311 04:16:49.152229    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:16:49.152253    4187 start.go:340] cluster config:
	{Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:49.152344    4187 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:16:49.156076    4187 out.go:177] * Starting "stopped-upgrade-629000" primary control-plane node in "stopped-upgrade-629000" cluster
	I0311 04:16:49.162068    4187 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 04:16:49.162128    4187 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0311 04:16:49.162145    4187 cache.go:56] Caching tarball of preloaded images
	I0311 04:16:49.162285    4187 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:16:49.162293    4187 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0311 04:16:49.162375    4187 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/config.json ...
	I0311 04:16:49.162678    4187 start.go:360] acquireMachinesLock for stopped-upgrade-629000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:16:49.162716    4187 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "stopped-upgrade-629000"
	I0311 04:16:49.162727    4187 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:16:49.162734    4187 fix.go:54] fixHost starting: 
	I0311 04:16:49.162850    4187 fix.go:112] recreateIfNeeded on stopped-upgrade-629000: state=Stopped err=<nil>
	W0311 04:16:49.162880    4187 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:16:49.173017    4187 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-629000" ...
	I0311 04:16:46.692048    4133 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.1430455s)
	I0311 04:16:46.692087    4133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 04:16:46.692529    4133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:16:46.698162    4133 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0311 04:16:46.698241    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0311 04:16:46.753084    4133 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:16:46.753099    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0311 04:16:46.997208    4133 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 04:16:46.997252    4133 cache_images.go:92] duration metric: took 4.075313209s to LoadCachedImages
	W0311 04:16:46.997285    4133 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0311 04:16:46.997293    4133 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0311 04:16:46.997354    4133 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-745000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 04:16:46.997412    4133 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 04:16:47.010791    4133 cni.go:84] Creating CNI manager for ""
	I0311 04:16:47.010801    4133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:16:47.010806    4133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 04:16:47.010813    4133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-745000 NodeName:running-upgrade-745000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 04:16:47.010877    4133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-745000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 04:16:47.010934    4133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0311 04:16:47.013908    4133 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 04:16:47.013936    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 04:16:47.017027    4133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0311 04:16:47.022213    4133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 04:16:47.027384    4133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0311 04:16:47.033175    4133 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0311 04:16:47.034552    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:16:47.099336    4133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:16:47.104672    4133 certs.go:68] Setting up /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000 for IP: 10.0.2.15
	I0311 04:16:47.104680    4133 certs.go:194] generating shared ca certs ...
	I0311 04:16:47.104688    4133 certs.go:226] acquiring lock for ca certs: {Name:mk0eff4ed47e91bcbb09c749a04fbf8f2901eda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.104849    4133 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key
	I0311 04:16:47.104895    4133 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key
	I0311 04:16:47.104901    4133 certs.go:256] generating profile certs ...
	I0311 04:16:47.104970    4133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.key
	I0311 04:16:47.104985    4133 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9
	I0311 04:16:47.104999    4133 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0311 04:16:47.190275    4133 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9 ...
	I0311 04:16:47.190291    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9: {Name:mk8bd2020a245cdde288e261a892ec5c133a8401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.190632    4133 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9 ...
	I0311 04:16:47.190638    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9: {Name:mk16202969482b6cca2ef030f4bb0253f9f004b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.190768    4133 certs.go:381] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt.cd4b92a9 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt
	I0311 04:16:47.190903    4133 certs.go:385] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key.cd4b92a9 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key
	I0311 04:16:47.191070    4133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/proxy-client.key
	I0311 04:16:47.191196    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem (1338 bytes)
	W0311 04:16:47.191229    4133 certs.go:480] ignoring /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434_empty.pem, impossibly tiny 0 bytes
	I0311 04:16:47.191234    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 04:16:47.191262    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem (1082 bytes)
	I0311 04:16:47.191286    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem (1123 bytes)
	I0311 04:16:47.191310    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem (1675 bytes)
	I0311 04:16:47.191363    4133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem (1708 bytes)
	I0311 04:16:47.191686    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 04:16:47.200226    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 04:16:47.206912    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 04:16:47.214243    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 04:16:47.221840    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 04:16:47.229323    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 04:16:47.236691    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 04:16:47.244026    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 04:16:47.251147    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 04:16:47.258133    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem --> /usr/share/ca-certificates/1434.pem (1338 bytes)
	I0311 04:16:47.265486    4133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem --> /usr/share/ca-certificates/14342.pem (1708 bytes)
	I0311 04:16:47.272641    4133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 04:16:47.277517    4133 ssh_runner.go:195] Run: openssl version
	I0311 04:16:47.279242    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 04:16:47.282442    4133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:16:47.283922    4133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:16:47.283941    4133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:16:47.285627    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 04:16:47.288570    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434.pem && ln -fs /usr/share/ca-certificates/1434.pem /etc/ssl/certs/1434.pem"
	I0311 04:16:47.291494    4133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434.pem
	I0311 04:16:47.292870    4133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 10:43 /usr/share/ca-certificates/1434.pem
	I0311 04:16:47.292893    4133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434.pem
	I0311 04:16:47.294777    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1434.pem /etc/ssl/certs/51391683.0"
	I0311 04:16:47.297935    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14342.pem && ln -fs /usr/share/ca-certificates/14342.pem /etc/ssl/certs/14342.pem"
	I0311 04:16:47.301143    4133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14342.pem
	I0311 04:16:47.302670    4133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 10:43 /usr/share/ca-certificates/14342.pem
	I0311 04:16:47.302693    4133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14342.pem
	I0311 04:16:47.304645    4133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14342.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 04:16:47.307295    4133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 04:16:47.308749    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 04:16:47.310444    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 04:16:47.312603    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 04:16:47.314473    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 04:16:47.316631    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 04:16:47.318325    4133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 04:16:47.320128    4133 kubeadm.go:391] StartCluster: {Name:running-upgrade-745000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50305 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:47.320212    4133 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:16:47.330551    4133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 04:16:47.334383    4133 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 04:16:47.334391    4133 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 04:16:47.334394    4133 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 04:16:47.334419    4133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 04:16:47.337689    4133 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.337912    4133 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-745000" does not appear in /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:16:47.337971    4133 kubeconfig.go:62] /Users/jenkins/minikube-integration/18350-986/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-745000" cluster setting kubeconfig missing "running-upgrade-745000" context setting]
	I0311 04:16:47.338117    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:16:47.338786    4133 kapi.go:59] client config for running-upgrade-745000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10604ffd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 04:16:47.339101    4133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 04:16:47.342207    4133 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-745000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
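Two hunks in that drift matter functionally: kubeadm 1.24 deprecates bare-path CRI sockets in favor of the URL form, and the kubelet cgroup driver has to agree with the one the Docker daemon is configured with. In comment form:

	# criSocket: /var/run/cri-dockerd.sock          <- legacy bare path (deprecated by kubeadm >= 1.24)
	# criSocket: unix:///var/run/cri-dockerd.sock   <- URL form the new config renders
	# cgroupDriver: cgroupfs                        <- must match Docker's native.cgroupdriver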
	I0311 04:16:47.342213    4133 kubeadm.go:1153] stopping kube-system containers ...
	I0311 04:16:47.342253    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:16:47.352840    4133 docker.go:483] Stopping containers: [cd570c5dd104 3947628dca50 3b51857cc2b5 8c896a6db6a9 42c9c863cbbd 4d4cb543edc7 a033847a9c75 51ea7e87d708 479f672812ba 735cbefdffdd 7330a9adce6b 9d632451aa93]
	I0311 04:16:47.352906    4133 ssh_runner.go:195] Run: docker stop cd570c5dd104 3947628dca50 3b51857cc2b5 8c896a6db6a9 42c9c863cbbd 4d4cb543edc7 a033847a9c75 51ea7e87d708 479f672812ba 735cbefdffdd 7330a9adce6b 9d632451aa93
	I0311 04:16:47.363611    4133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 04:16:47.450915    4133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:16:47.454713    4133 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar 11 11:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 11 11:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 11 11:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 11 11:16 /etc/kubernetes/scheduler.conf
	
	I0311 04:16:47.454744    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf
	I0311 04:16:47.457928    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.457961    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:16:47.461191    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf
	I0311 04:16:47.464129    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.464160    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:16:47.466823    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf
	I0311 04:16:47.469743    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.469763    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:16:47.472574    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf
	I0311 04:16:47.474990    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:16:47.475018    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
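The four grep/rm pairs above are one invalidation loop: any kubeconfig that no longer points at this run's control-plane endpoint (port 50305) is deleted so the kubeconfig phase below regenerates it. Condensed (sketch):

	# Drop any kubeconfig that doesn't reference the expected endpoint:
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -qF "https://control-plane.minikube.internal:50305" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done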
	I0311 04:16:47.477886    4133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:16:47.480828    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:47.501780    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:48.090177    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:48.294490    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:16:48.315513    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
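restartPrimaryControlPlane re-runs only selected kubeadm init phases rather than a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. The sequence above, condensed (sketch):

	# Re-run individual init phases against the freshly copied kubeadm.yaml:
	for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # $phase unquoted on purpose
	done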
	I0311 04:16:48.334969    4133 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:16:48.335047    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:48.837090    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:49.337080    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:49.837068    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:16:49.841998    4133 api_server.go:72] duration metric: took 1.507076625s to wait for apiserver process to appear ...
	I0311 04:16:49.842007    4133 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:16:49.842015    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:16:49.177162    4187 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50310-:22,hostfwd=tcp::50311-:2376,hostname=stopped-upgrade-629000 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/disk.qcow2
	I0311 04:16:49.223829    4187 main.go:141] libmachine: STDOUT: 
	I0311 04:16:49.223870    4187 main.go:141] libmachine: STDERR: 
	I0311 04:16:49.223877    4187 main.go:141] libmachine: Waiting for VM to start (ssh -p 50310 docker@127.0.0.1)...
	I0311 04:16:54.844016    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:16:54.844053    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:16:59.844257    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:16:59.844279    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:04.844518    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:04.844540    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
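The healthz wait polls the apiserver endpoint until it answers or an overall timeout expires; each 5 s gap in the timestamps above is one probe dying on a client timeout. A hand-rolled equivalent (sketch; minikube bounds the loop):

	until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	  sleep 1
	done

That it never succeeds here is likely the qemu2 driver's user-mode networking: 10.0.2.15 is the guest-side address and is not routable from the host, so host-side probes of it cannot complete.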
	I0311 04:17:08.616419    4187 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/config.json ...
	I0311 04:17:08.616675    4187 machine.go:94] provisionDockerMachine start ...
	I0311 04:17:08.616727    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.616891    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.616897    4187 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 04:17:08.676262    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 04:17:08.676285    4187 buildroot.go:166] provisioning hostname "stopped-upgrade-629000"
	I0311 04:17:08.676348    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.676474    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.676481    4187 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-629000 && echo "stopped-upgrade-629000" | sudo tee /etc/hostname
	I0311 04:17:08.735868    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-629000
	
	I0311 04:17:08.735922    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.736025    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.736034    4187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-629000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-629000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-629000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 04:17:08.792892    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 04:17:08.792905    4187 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18350-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18350-986/.minikube}
	I0311 04:17:08.792919    4187 buildroot.go:174] setting up certificates
	I0311 04:17:08.792924    4187 provision.go:84] configureAuth start
	I0311 04:17:08.792928    4187 provision.go:143] copyHostCerts
	I0311 04:17:08.793008    4187 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem, removing ...
	I0311 04:17:08.793014    4187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem
	I0311 04:17:08.793126    4187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem (1082 bytes)
	I0311 04:17:08.793291    4187 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem, removing ...
	I0311 04:17:08.793295    4187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem
	I0311 04:17:08.793351    4187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem (1123 bytes)
	I0311 04:17:08.793477    4187 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem, removing ...
	I0311 04:17:08.793480    4187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem
	I0311 04:17:08.793525    4187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem (1675 bytes)
	I0311 04:17:08.793607    4187 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-629000 san=[127.0.0.1 localhost minikube stopped-upgrade-629000]
	I0311 04:17:08.908450    4187 provision.go:177] copyRemoteCerts
	I0311 04:17:08.908496    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 04:17:08.908505    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:17:08.938813    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 04:17:08.945602    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 04:17:08.952491    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 04:17:08.959831    4187 provision.go:87] duration metric: took 166.898958ms to configureAuth
	I0311 04:17:08.959839    4187 buildroot.go:189] setting minikube options for container-runtime
	I0311 04:17:08.959950    4187 config.go:182] Loaded profile config "stopped-upgrade-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:17:08.959996    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.960093    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.960098    4187 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0311 04:17:09.014554    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0311 04:17:09.014563    4187 buildroot.go:70] root file system type: tmpfs
	I0311 04:17:09.014624    4187 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0311 04:17:09.014670    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:09.014787    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:09.014819    4187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0311 04:17:09.845003    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:09.845072    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:09.073899    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0311 04:17:09.076965    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:09.077114    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:09.077121    4187 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0311 04:17:09.421306    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0311 04:17:09.421319    4187 machine.go:97] duration metric: took 804.661917ms to provisionDockerMachine
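The unit install above (04:17:09.077) uses a replace-only-on-change idiom: diff the rendered unit against what is on disk and only then move it into place and bounce the service. Note that here diff itself failed ("can't stat") because no unit existed yet, which the `||` branch treats the same as a difference. Standalone (sketch):

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	fi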
	I0311 04:17:09.421326    4187 start.go:293] postStartSetup for "stopped-upgrade-629000" (driver="qemu2")
	I0311 04:17:09.421332    4187 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 04:17:09.421403    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 04:17:09.421412    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:17:09.451363    4187 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 04:17:09.452777    4187 info.go:137] Remote host: Buildroot 2021.02.12
	I0311 04:17:09.452785    4187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/addons for local assets ...
	I0311 04:17:09.452860    4187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/files for local assets ...
	I0311 04:17:09.452968    4187 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem -> 14342.pem in /etc/ssl/certs
	I0311 04:17:09.453091    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 04:17:09.456101    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem --> /etc/ssl/certs/14342.pem (1708 bytes)
	I0311 04:17:09.463543    4187 start.go:296] duration metric: took 42.213208ms for postStartSetup
	I0311 04:17:09.463557    4187 fix.go:56] duration metric: took 20.301429709s for fixHost
	I0311 04:17:09.463591    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:09.463697    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:09.463702    4187 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 04:17:09.515973    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710155829.216220087
	
	I0311 04:17:09.515980    4187 fix.go:216] guest clock: 1710155829.216220087
	I0311 04:17:09.515984    4187 fix.go:229] Guest: 2024-03-11 04:17:09.216220087 -0700 PDT Remote: 2024-03-11 04:17:09.463558 -0700 PDT m=+20.415560376 (delta=-247.337913ms)
	I0311 04:17:09.515994    4187 fix.go:200] guest clock delta is within tolerance: -247.337913ms
	I0311 04:17:09.515996    4187 start.go:83] releasing machines lock for "stopped-upgrade-629000", held for 20.353880042s
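The fix.go lines above sample the guest clock over SSH and resync only when the delta against the host exceeds tolerance; the -247 ms seen here passes. Roughly, assuming GNU date in the guest and this run's forwarded SSH port:

	guest=$(ssh -p 50310 docker@127.0.0.1 'date +%s.%N')
	host=$(python3 -c 'import time; print(f"{time.time():.9f}")')   # macOS date lacks %N
	echo "delta = $(echo "$guest - $host" | bc) s"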
	I0311 04:17:09.516058    4187 ssh_runner.go:195] Run: cat /version.json
	I0311 04:17:09.516069    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:17:09.516058    4187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 04:17:09.516102    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	W0311 04:17:09.516587    4187 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50310: connect: connection refused
	I0311 04:17:09.516608    4187 retry.go:31] will retry after 326.227949ms: dial tcp [::1]:50310: connect: connection refused
	W0311 04:17:09.897526    4187 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0311 04:17:09.897716    4187 ssh_runner.go:195] Run: systemctl --version
	I0311 04:17:09.902636    4187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 04:17:09.906749    4187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 04:17:09.906808    4187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0311 04:17:09.913394    4187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0311 04:17:09.922622    4187 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 04:17:09.922639    4187 start.go:494] detecting cgroup driver to use...
	I0311 04:17:09.922770    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 04:17:09.934822    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0311 04:17:09.939495    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 04:17:09.943701    4187 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 04:17:09.943744    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 04:17:09.947756    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 04:17:09.951716    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 04:17:09.955424    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 04:17:09.958912    4187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 04:17:09.962140    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 04:17:09.965149    4187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 04:17:09.968176    4187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 04:17:09.971275    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:10.039542    4187 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0311 04:17:10.050639    4187 start.go:494] detecting cgroup driver to use...
	I0311 04:17:10.050707    4187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0311 04:17:10.055723    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 04:17:10.060283    4187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 04:17:10.066696    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 04:17:10.071317    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 04:17:10.075876    4187 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 04:17:10.141275    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 04:17:10.147107    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 04:17:10.153067    4187 ssh_runner.go:195] Run: which cri-dockerd
	I0311 04:17:10.154496    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0311 04:17:10.157195    4187 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0311 04:17:10.161993    4187 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0311 04:17:10.224421    4187 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0311 04:17:10.293386    4187 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0311 04:17:10.293452    4187 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
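The 130-byte daemon.json itself is not echoed to the log; a representative payload for this "cgroupfs" step would be the following (illustrative only, not the verbatim file from this run):

	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF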
	I0311 04:17:10.298748    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:10.364602    4187 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 04:17:11.509035    4187 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.144440416s)
	I0311 04:17:11.509170    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0311 04:17:11.514093    4187 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0311 04:17:11.520163    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 04:17:11.525043    4187 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0311 04:17:11.587667    4187 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0311 04:17:11.651516    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:11.715915    4187 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0311 04:17:11.722194    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 04:17:11.726422    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:11.796990    4187 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0311 04:17:11.836303    4187 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0311 04:17:11.836375    4187 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0311 04:17:11.839691    4187 start.go:562] Will wait 60s for crictl version
	I0311 04:17:11.839744    4187 ssh_runner.go:195] Run: which crictl
	I0311 04:17:11.841276    4187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 04:17:11.855644    4187 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0311 04:17:11.855714    4187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 04:17:11.871429    4187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 04:17:11.890419    4187 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0311 04:17:11.890557    4187 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0311 04:17:11.891780    4187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 04:17:11.895799    4187 kubeadm.go:877] updating cluster {Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0311 04:17:11.895842    4187 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 04:17:11.895881    4187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 04:17:11.906127    4187 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 04:17:11.906135    4187 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 04:17:11.906176    4187 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 04:17:11.909134    4187 ssh_runner.go:195] Run: which lz4
	I0311 04:17:11.910323    4187 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 04:17:11.911430    4187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 04:17:11.911442    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0311 04:17:12.620673    4187 docker.go:649] duration metric: took 710.403334ms to copy over tarball
	I0311 04:17:12.620725    4187 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 04:17:13.788120    4187 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.167414375s)
	I0311 04:17:13.788134    4187 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 04:17:13.803933    4187 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 04:17:13.807362    4187 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0311 04:17:13.812202    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:13.876458    4187 ssh_runner.go:195] Run: sudo systemctl restart docker
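The preload path avoids pulling images one by one: a single lz4 tarball of the Docker image store is copied in, unpacked over /var, repositories.json is rewritten, and the daemon is restarted to pick it up. Condensed (paths from this run; the scp target is illustrative):

	# ~360 MB tarball shipped once instead of eight registry pulls:
	scp preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 docker@guest:/preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo systemctl restart docker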
	I0311 04:17:14.846063    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:14.846158    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:15.465146    4187 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.588718875s)
	I0311 04:17:15.465233    4187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 04:17:15.476013    4187 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 04:17:15.476030    4187 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 04:17:15.476036    4187 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 04:17:15.484652    4187 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:15.484791    4187 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:15.484965    4187 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:15.484982    4187 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0311 04:17:15.485052    4187 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:15.485066    4187 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:15.485562    4187 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:15.485792    4187 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:15.493920    4187 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:15.493968    4187 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0311 04:17:15.494021    4187 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:15.494230    4187 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:15.494197    4187 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:15.494303    4187 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:15.494209    4187 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:15.494807    4187 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.430532    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.471000    4187 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0311 04:17:17.471047    4187 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.471146    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.492399    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0311 04:17:17.495196    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0311 04:17:17.511371    4187 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0311 04:17:17.511395    4187 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0311 04:17:17.511455    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0311 04:17:17.522223    4187 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0311 04:17:17.522356    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:17.524526    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0311 04:17:17.524624    4187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0311 04:17:17.537234    4187 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0311 04:17:17.537254    4187 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:17.537266    4187 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0311 04:17:17.537281    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0311 04:17:17.537305    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:17.549278    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0311 04:17:17.549382    4187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0311 04:17:17.550555    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:17.551256    4187 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0311 04:17:17.551271    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0311 04:17:17.560339    4187 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0311 04:17:17.560354    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0311 04:17:17.577370    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:17.583517    4187 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0311 04:17:17.583538    4187 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:17.583586    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:17.590716    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:17.593308    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:17.608387    4187 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0311 04:17:17.611024    4187 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0311 04:17:17.611047    4187 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:17.611098    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:17.624292    4187 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0311 04:17:17.624307    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0311 04:17:17.640324    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0311 04:17:17.640322    4187 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0311 04:17:17.640342    4187 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0311 04:17:17.640359    4187 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:17.640359    4187 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:17.640406    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:17.640435    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:17.641667    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0311 04:17:17.685175    4187 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0311 04:17:17.685212    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0311 04:17:17.685224    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0311 04:17:17.933130    4187 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0311 04:17:17.933568    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:17.965831    4187 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0311 04:17:17.965870    4187 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:17.965984    4187 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:17.989359    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 04:17:17.989492    4187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:17:17.991227    4187 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0311 04:17:17.991247    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0311 04:17:18.017255    4187 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:17:18.017269    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0311 04:17:18.247189    4187 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 04:17:18.247239    4187 cache_images.go:92] duration metric: took 2.771274167s to LoadCachedImages
	W0311 04:17:18.247274    4187 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
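Every image except kube-apiserver could be restored; the warning above records why the cache load aborts: the host-side cache never contained kube-apiserver_v1.24.1. For the images that did make it, the restore idiom is simply:

	# Stream a cached image archive into the freshly restarted daemon (from the run above):
	sudo cat /var/lib/minikube/images/pause_3.7 | docker load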
	I0311 04:17:18.247282    4187 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0311 04:17:18.247335    4187 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-629000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 04:17:18.247393    4187 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 04:17:18.261113    4187 cni.go:84] Creating CNI manager for ""
	I0311 04:17:18.261125    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:17:18.261130    4187 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 04:17:18.261140    4187 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-629000 NodeName:stopped-upgrade-629000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 04:17:18.261203    4187 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-629000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
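The rendered file stacks four kubeadm documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); per the comment in the YAML itself, the "0%" eviction thresholds and imageGCHighThresholdPercent: 100 deliberately disable kubelet disk-pressure management inside the small test VM. A sketch for pulling one document back out, assuming only standard awk (not a command the test runs):

    # print the third document (KubeletConfiguration); each --- separator
    # bumps the counter and is itself skipped
    sudo awk '/^---$/{n++; next} n==2' /var/tmp/minikube/kubeadm.yaml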
	
	I0311 04:17:18.261256    4187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0311 04:17:18.264263    4187 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 04:17:18.264296    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 04:17:18.267463    4187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0311 04:17:18.272628    4187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 04:17:18.277736    4187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0311 04:17:18.282706    4187 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0311 04:17:18.283921    4187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
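The hosts-file rewrite above is idempotent: it filters out any existing line ending in the tab-separated hostname, appends the desired entry, and copies the temp file back over /etc/hosts. The same pattern generalized as a sketch (ensure_host is a hypothetical helper, not something in the log):

    ensure_host() {
      local ip="$1" name="$2"
      # drop any old line for the name, append the desired one, copy back with sudo
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    ensure_host 10.0.2.15 control-plane.minikube.internal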
	I0311 04:17:18.287827    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:18.352398    4187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:17:18.358118    4187 certs.go:68] Setting up /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000 for IP: 10.0.2.15
	I0311 04:17:18.358125    4187 certs.go:194] generating shared ca certs ...
	I0311 04:17:18.358134    4187 certs.go:226] acquiring lock for ca certs: {Name:mk0eff4ed47e91bcbb09c749a04fbf8f2901eda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.358278    4187 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key
	I0311 04:17:18.358322    4187 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key
	I0311 04:17:18.358327    4187 certs.go:256] generating profile certs ...
	I0311 04:17:18.358398    4187 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.key
	I0311 04:17:18.358415    4187 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5
	I0311 04:17:18.358429    4187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0311 04:17:18.463977    4187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5 ...
	I0311 04:17:18.463995    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5: {Name:mk880e1d74fdec3c125cfeb3e8aa66f979538b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.464295    4187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5 ...
	I0311 04:17:18.464300    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5: {Name:mkb0249819e3f4a19648b4a9e7b9bb2b95cec646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.464431    4187 certs.go:381] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt
	I0311 04:17:18.464577    4187 certs.go:385] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key
	I0311 04:17:18.464729    4187 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/proxy-client.key
	I0311 04:17:18.464864    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem (1338 bytes)
	W0311 04:17:18.464892    4187 certs.go:480] ignoring /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434_empty.pem, impossibly tiny 0 bytes
	I0311 04:17:18.464898    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 04:17:18.464916    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem (1082 bytes)
	I0311 04:17:18.464933    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem (1123 bytes)
	I0311 04:17:18.464948    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem (1675 bytes)
	I0311 04:17:18.464992    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem (1708 bytes)
	I0311 04:17:18.465318    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 04:17:18.472574    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 04:17:18.479863    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 04:17:18.487158    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 04:17:18.494274    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 04:17:18.500945    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 04:17:18.507985    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 04:17:18.515456    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 04:17:18.523491    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem --> /usr/share/ca-certificates/1434.pem (1338 bytes)
	I0311 04:17:18.531706    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem --> /usr/share/ca-certificates/14342.pem (1708 bytes)
	I0311 04:17:18.539162    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 04:17:18.546477    4187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 04:17:18.551879    4187 ssh_runner.go:195] Run: openssl version
	I0311 04:17:18.554048    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14342.pem && ln -fs /usr/share/ca-certificates/14342.pem /etc/ssl/certs/14342.pem"
	I0311 04:17:18.557391    4187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14342.pem
	I0311 04:17:18.558757    4187 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 10:43 /usr/share/ca-certificates/14342.pem
	I0311 04:17:18.558778    4187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14342.pem
	I0311 04:17:18.560503    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14342.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 04:17:18.563889    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 04:17:18.567418    4187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:17:18.568952    4187 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:17:18.568982    4187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:17:18.570813    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 04:17:18.573896    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434.pem && ln -fs /usr/share/ca-certificates/1434.pem /etc/ssl/certs/1434.pem"
	I0311 04:17:18.576920    4187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434.pem
	I0311 04:17:18.578942    4187 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 10:43 /usr/share/ca-certificates/1434.pem
	I0311 04:17:18.578981    4187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434.pem
	I0311 04:17:18.580841    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1434.pem /etc/ssl/certs/51391683.0"
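The ls / openssl x509 -hash / ln -fs triplets above re-implement OpenSSL's c_rehash convention by hand: -hash prints the subject-name hash under which OpenSSL looks the certificate up as /etc/ssl/certs/<hash>.0 (b5213941 for minikubeCA.pem here). Condensed into a sketch:

    # compute the subject hash, then install the lookup symlink
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"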
	I0311 04:17:18.584448    4187 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 04:17:18.586052    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 04:17:18.587897    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 04:17:18.589968    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 04:17:18.592100    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 04:17:18.594179    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 04:17:18.596124    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
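The -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours from now; openssl exits non-zero if the cert expires within the given number of seconds. The same check for a single cert, sketched:

    # exit status drives the decision: 0 = still valid in 24h, non-zero = expiring
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "valid for at least another day"
    else
      echo "expires within 24h; would need regeneration"
    fi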
	I0311 04:17:18.598182    4187 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:17:18.598259    4187 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:17:18.609251    4187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 04:17:18.612708    4187 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 04:17:18.612716    4187 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 04:17:18.612718    4187 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 04:17:18.612742    4187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 04:17:18.616534    4187 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:17:18.616799    4187 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-629000" does not appear in /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:17:18.616895    4187 kubeconfig.go:62] /Users/jenkins/minikube-integration/18350-986/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-629000" cluster setting kubeconfig missing "stopped-upgrade-629000" context setting]
	I0311 04:17:18.617107    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.617535    4187 kapi.go:59] client config for stopped-upgrade-629000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5bfd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 04:17:18.617834    4187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 04:17:18.620723    4187 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-629000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
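The drift check is simply diff's exit status on the live kubeadm.yaml versus the freshly staged .new file: exit 0 means no change, exit 1 (as here, where the CRI socket scheme and cgroup driver differ) triggers a reconfigure. Roughly, as a sketch — the actual copy happens a few steps later in the log:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi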
	I0311 04:17:18.620728    4187 kubeadm.go:1153] stopping kube-system containers ...
	I0311 04:17:18.620767    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:17:18.631287    4187 docker.go:483] Stopping containers: [a673dc823c5e fc1103117f22 2edd01543dcf 870860a04f07 cda83ca956bb 47ea3d48656f e28e02ee3daa 0ff9bfcb7135]
	I0311 04:17:18.631358    4187 ssh_runner.go:195] Run: docker stop a673dc823c5e fc1103117f22 2edd01543dcf 870860a04f07 cda83ca956bb 47ea3d48656f e28e02ee3daa 0ff9bfcb7135
	I0311 04:17:18.650245    4187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 04:17:18.655132    4187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:17:18.658019    4187 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 04:17:18.658025    4187 kubeadm.go:156] found existing configuration files:
	
	I0311 04:17:18.658047    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf
	I0311 04:17:18.660433    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 04:17:18.660456    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:17:18.663440    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf
	I0311 04:17:18.666106    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 04:17:18.666135    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:17:18.668550    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf
	I0311 04:17:18.671481    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 04:17:18.671502    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:17:18.674377    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf
	I0311 04:17:18.676756    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 04:17:18.676777    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
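Each of the four grep/rm pairs above keeps a kubeconfig only if it already points at the expected control-plane endpoint; since none of the files exist here, grep exits 2 and every file is (redundantly) removed. The per-file pattern, sketched:

    # keep the file only if it already references the expected endpoint
    f=/etc/kubernetes/scheduler.conf
    sudo grep -q "https://control-plane.minikube.internal:50372" "$f" || sudo rm -f "$f"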
	I0311 04:17:18.679918    4187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:17:18.682792    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:18.706579    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:19.846914    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:19.846971    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:19.349298    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:19.463524    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:19.485999    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
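Rather than a full kubeadm init, the restart path replays individual init phases against the staged config, in the order the log shows: certs, kubeconfig, kubelet-start, control-plane, etcd. An equivalent sketch of that sequence:

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is deliberately unquoted so "certs all" splits into two arguments
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done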
	I0311 04:17:19.523466    4187 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:17:19.523552    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:17:20.025584    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:17:20.525556    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:17:20.529732    4187 api_server.go:72] duration metric: took 1.006297833s to wait for apiserver process to appear ...
	I0311 04:17:20.529741    4187 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:17:20.529749    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
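The healthz wait is a plain HTTPS GET with a short client timeout, retried every few seconds; the interleaved PIDs 4133 and 4187 are two concurrent test processes, each polling its own QEMU guest, which both sit at the user-mode-networking address 10.0.2.15. A manual equivalent (a sketch; -k because the cluster CA is not in the host trust store):

    curl -k --max-time 4 https://10.0.2.15:8443/healthz || echo "apiserver not healthy yet"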
	I0311 04:17:24.848261    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:24.848359    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:25.531660    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:25.531681    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:29.850154    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:29.850197    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:30.531748    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:30.531795    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:34.852147    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:34.852184    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:35.532023    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:35.532064    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:39.852353    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:39.852393    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:40.532481    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:40.532561    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:44.854514    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:44.854570    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:45.533720    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:45.533786    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:49.856807    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:49.856973    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:17:49.872142    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:17:49.872221    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:17:49.884500    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:17:49.884571    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:17:49.895827    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:17:49.895899    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:17:49.915948    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:17:49.916010    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:17:49.926014    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:17:49.926073    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:17:49.936615    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:17:49.936688    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:17:49.950808    4133 logs.go:276] 0 containers: []
	W0311 04:17:49.950818    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:17:49.950871    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:17:49.961144    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:17:49.961166    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:17:49.961172    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:17:49.975910    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:17:49.975922    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:17:49.993905    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:17:49.993920    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:17:50.005541    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:17:50.005554    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:17:50.017450    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:17:50.017463    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:17:50.022032    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:17:50.022041    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:17:50.059031    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:17:50.059042    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:17:50.074226    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:17:50.074240    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:17:50.115423    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:17:50.115434    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:17:50.126919    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:17:50.126933    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:17:50.138404    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:17:50.138413    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:17:50.163370    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:17:50.163377    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:17:50.242297    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:17:50.242311    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:17:50.256881    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:17:50.256895    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:17:50.268228    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:17:50.268239    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:17:50.284358    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:17:50.284373    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:17:50.300516    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:17:50.300526    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
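Each diagnostic pass above is the same two-step per component: list containers matching a name filter, then tail each one's logs. The log unrolls what is effectively this loop (sketch, shown for etcd):

    for id in $(docker ps -a --filter=name=k8s_etcd --format='{{.ID}}'); do
      docker logs --tail 400 "$id"
    done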
	I0311 04:17:50.534690    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:50.534739    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:52.818325    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:55.535888    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:55.536022    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:57.820655    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:57.821130    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:17:57.863279    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:17:57.863417    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:17:57.900792    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:17:57.900873    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:17:57.920874    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:17:57.920936    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:17:57.931317    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:17:57.931388    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:17:57.941673    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:17:57.941749    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:17:57.952447    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:17:57.952523    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:17:57.962469    4133 logs.go:276] 0 containers: []
	W0311 04:17:57.962479    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:17:57.962532    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:17:57.973131    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:17:57.973148    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:17:57.973153    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:17:57.978062    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:17:57.978071    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:17:57.995499    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:17:57.995516    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:17:58.010108    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:17:58.010119    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:17:58.027698    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:17:58.027708    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:17:58.039171    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:17:58.039181    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:17:58.051433    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:17:58.051444    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:17:58.062786    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:17:58.062797    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:17:58.099414    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:17:58.099423    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:17:58.137733    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:17:58.137743    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:17:58.176580    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:17:58.176591    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:17:58.203088    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:17:58.203095    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:17:58.214586    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:17:58.214598    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:17:58.238491    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:17:58.238503    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:17:58.251021    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:17:58.251032    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:17:58.265601    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:17:58.265615    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:17:58.280669    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:17:58.280681    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:00.794042    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:00.537829    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:00.537899    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:05.796601    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:05.797015    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:05.835606    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:05.835745    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:05.858619    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:05.858740    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:05.873775    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:05.873849    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:05.888545    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:05.888628    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:05.901155    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:05.901253    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:05.913089    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:05.913164    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:05.923659    4133 logs.go:276] 0 containers: []
	W0311 04:18:05.923671    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:05.923734    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:05.940129    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:05.940148    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:05.940154    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:05.979380    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:05.979391    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:05.984365    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:05.984373    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:05.996104    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:05.996116    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:06.009343    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:06.009358    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:05.538696    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:05.538770    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:06.024628    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:06.024864    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:06.040090    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:06.040103    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:06.052950    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:06.052959    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:06.069934    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:06.069945    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:06.084405    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:06.084419    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:06.109827    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:06.109834    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:06.150739    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:06.150751    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:06.165385    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:06.165396    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:06.203901    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:06.203910    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:06.218807    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:06.218818    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:06.230247    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:06.230259    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:06.247476    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:06.247489    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:08.759495    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:10.541234    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:10.541375    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:13.762164    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:13.762568    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:13.796698    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:13.796829    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:13.816183    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:13.816294    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:13.830032    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:13.830114    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:13.841991    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:13.842068    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:13.852455    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:13.852528    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:13.863065    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:13.863131    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:13.873744    4133 logs.go:276] 0 containers: []
	W0311 04:18:13.873758    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:13.873818    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:13.884890    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:13.884908    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:13.884913    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:13.905517    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:13.905528    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:13.926675    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:13.926687    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:13.941151    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:13.941163    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:13.980251    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:13.980260    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:14.017136    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:14.017147    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:14.031389    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:14.031401    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:14.043111    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:14.043122    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:14.056850    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:14.056860    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:14.061192    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:14.061198    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:14.101101    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:14.101111    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:14.115289    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:14.115299    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:14.136748    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:14.136761    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:14.148895    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:14.148904    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:14.175001    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:14.175009    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:14.188709    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:14.188719    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:14.200592    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:14.200603    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:15.543853    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:15.543912    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:16.714551    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:20.545240    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:20.545437    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:20.562758    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:20.562844    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:20.576180    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:20.576264    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:20.587599    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:20.587675    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:20.598295    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:20.598375    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:20.608360    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:20.608429    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:20.618427    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:20.618491    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:20.628528    4187 logs.go:276] 0 containers: []
	W0311 04:18:20.628539    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:20.628596    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:20.638958    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:20.638975    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:20.638993    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:20.656495    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:20.656505    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:20.696159    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:20.696167    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:20.714337    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:20.714346    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:20.726491    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:20.726500    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:20.737544    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:20.737555    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:20.750084    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:20.750096    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:20.767584    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:20.767594    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:20.779028    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:20.779042    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:20.783551    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:20.783558    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:20.866691    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:20.866705    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:20.881749    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:20.881766    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:20.926928    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:20.926939    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:20.942731    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:20.942742    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:20.954945    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:20.954957    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:20.978944    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:20.978951    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:20.993392    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:20.993402    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:23.506823    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:21.716546    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:21.716696    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:21.728109    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:21.728186    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:21.738534    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:21.738601    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:21.749588    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:21.749659    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:21.760404    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:21.760475    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:21.770674    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:21.770741    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:21.784269    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:21.784345    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:21.794594    4133 logs.go:276] 0 containers: []
	W0311 04:18:21.794606    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:21.794662    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:21.805187    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:21.805203    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:21.805209    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:21.843491    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:21.843506    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:21.855476    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:21.855489    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:21.867170    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:21.867184    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:21.882419    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:21.882433    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:21.894328    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:21.894339    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:21.905674    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:21.905689    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:21.910331    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:21.910339    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:21.947308    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:21.947321    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:21.961305    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:21.961320    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:21.979223    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:21.979234    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:21.991433    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:21.991444    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:22.005863    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:22.005874    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:22.022726    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:22.022737    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:22.047546    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:22.047559    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:22.092261    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:22.092273    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:22.106121    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:22.106132    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:24.622833    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
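Both minikube processes above (PIDs 4133 and 4187) are stuck in the same loop: probe the apiserver healthz endpoint, hit the client timeout, then fall back to collecting diagnostics. A minimal Go sketch of that probing step, assuming a 5-second client timeout and a self-signed in-VM certificate (neither value is confirmed by the log), which reproduces the "Client.Timeout exceeded while awaiting headers" failure mode seen here:

    // Sketch only, not minikube's actual implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            // Assumed timeout; exceeding it yields the Client.Timeout error in the log.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: the in-VM apiserver serves a self-signed certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            // On failure the harness falls back to gathering container logs.
            fmt.Println(err)
        }
    }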
	I0311 04:18:28.509038    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:28.509185    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:28.523677    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:28.523776    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:28.535961    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:28.536045    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:28.547244    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:28.547318    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:28.557690    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:28.557764    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:28.568395    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:28.568467    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:28.579095    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:28.579184    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:28.589466    4187 logs.go:276] 0 containers: []
	W0311 04:18:28.589477    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:28.589534    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:28.600007    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:28.600025    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:28.600030    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:28.614070    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:28.614081    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:28.629259    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:28.629270    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:28.641569    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:28.641581    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:28.680889    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:28.680902    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:28.698078    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:28.698087    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:28.710176    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:28.710192    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:28.736314    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:28.736328    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:28.775327    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:28.775343    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:28.792740    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:28.792753    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:28.804731    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:28.804741    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:28.808822    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:28.808831    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:28.823267    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:28.823279    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:28.836582    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:28.836596    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:28.855086    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:28.855096    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:28.866780    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:28.866789    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:28.903852    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:28.903864    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:29.625090    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:29.625336    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:29.645954    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:29.646063    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:29.661452    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:29.661543    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:29.674232    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:29.674307    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:29.685428    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:29.685495    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:29.696362    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:29.696430    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:29.707443    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:29.707504    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:29.718328    4133 logs.go:276] 0 containers: []
	W0311 04:18:29.718342    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:29.718408    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:29.730353    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:29.730369    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:29.730375    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:29.767164    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:29.767178    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:29.788747    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:29.788759    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:29.802948    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:29.802961    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:29.817212    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:29.817227    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:29.835141    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:29.835151    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:29.847103    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:29.847114    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:29.872782    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:29.872790    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:29.885502    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:29.885514    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:29.923082    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:29.923094    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:29.941754    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:29.941765    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:29.961694    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:29.961705    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:29.977016    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:29.977024    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:29.996259    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:29.996270    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:30.000862    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:30.000868    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:30.039697    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:30.039713    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:30.052360    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:30.052372    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:31.421573    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:32.577553    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
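After each failed healthz probe, the harness enumerates containers per control-plane component and tails each one's logs. A self-contained sketch of that fan-out, run locally rather than through minikube's ssh_runner (an assumption made for the sketch; the docker commands themselves are the ones shown verbatim in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            // Same enumeration command the harness runs over SSH.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
            if err != nil {
                fmt.Println("enumerate failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // Mirrors the warning emitted for "kindnet" above.
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // Same per-container gather command as in the log.
                logs, _ := exec.Command("/bin/bash", "-c",
                    "docker logs --tail 400 "+id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }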
	I0311 04:18:36.422721    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:36.422891    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:36.437598    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:36.437669    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:36.452205    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:36.452273    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:36.463188    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:36.463255    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:36.473651    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:36.473723    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:36.484769    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:36.484837    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:36.501260    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:36.501348    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:36.512056    4187 logs.go:276] 0 containers: []
	W0311 04:18:36.512067    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:36.512121    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:36.523222    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:36.523245    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:36.523250    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:36.535265    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:36.535274    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:36.570845    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:36.570859    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:36.585443    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:36.585454    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:36.599705    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:36.599715    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:36.613860    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:36.613875    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:36.625502    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:36.625511    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:36.641499    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:36.641509    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:36.646397    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:36.646404    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:36.661423    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:36.661433    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:36.679401    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:36.679412    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:36.692943    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:36.692954    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:36.716395    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:36.716403    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:36.728140    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:36.728150    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:36.769444    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:36.769455    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:36.786255    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:36.786266    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:36.797944    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:36.797954    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:37.579892    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:37.580039    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:37.594827    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:37.594913    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:37.607001    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:37.607072    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:37.617347    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:37.617422    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:37.628210    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:37.628279    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:37.638818    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:37.638878    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:37.649962    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:37.650034    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:37.660515    4133 logs.go:276] 0 containers: []
	W0311 04:18:37.660527    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:37.660585    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:37.670767    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:37.670786    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:37.670792    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:37.675055    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:37.675061    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:37.716533    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:37.716544    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:37.733408    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:37.733419    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:37.748157    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:37.748168    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:37.774682    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:37.774692    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:37.787149    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:37.787160    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:37.825485    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:37.825497    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:37.861924    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:37.861956    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:37.876418    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:37.876431    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:37.888264    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:37.888276    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:37.900405    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:37.900417    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:37.915084    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:37.915094    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:37.931596    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:37.931613    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:37.950045    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:37.950058    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:37.965512    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:37.965524    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:37.983941    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:37.983952    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:40.498159    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:39.338715    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:45.500360    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:45.500567    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:45.520193    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:45.520262    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:45.532298    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:45.532359    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:45.544740    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:45.544816    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:45.557700    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:45.557772    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:45.570293    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:45.570357    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:45.583563    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:45.583635    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:45.595672    4133 logs.go:276] 0 containers: []
	W0311 04:18:45.595683    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:45.595736    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:45.611227    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:45.611245    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:45.611249    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:45.629382    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:45.629396    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:45.643381    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:45.643393    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:45.656692    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:45.656705    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:45.680117    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:45.680133    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:45.696184    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:45.696196    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:45.709834    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:45.709851    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:45.747541    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:45.747554    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:45.763256    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:45.763268    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:45.780332    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:45.780345    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:45.793846    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:45.793857    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:45.808886    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:45.808898    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:45.848724    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:45.848737    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:45.860914    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:45.860928    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:45.873658    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:45.873670    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:45.898053    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:45.898066    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:45.934119    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:45.934130    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:44.341132    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:44.341366    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:44.365647    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:44.365802    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:44.382001    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:44.382091    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:44.394920    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:44.394998    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:44.406512    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:44.406598    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:44.417418    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:44.417487    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:44.428248    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:44.428315    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:44.438625    4187 logs.go:276] 0 containers: []
	W0311 04:18:44.438638    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:44.438698    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:44.448995    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:44.449015    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:44.449021    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:44.460956    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:44.460971    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:44.478768    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:44.478783    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:44.518119    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:44.518126    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:44.534630    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:44.534640    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:44.553274    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:44.553286    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:44.567661    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:44.567672    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:44.582038    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:44.582052    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:44.595157    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:44.595171    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:44.599302    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:44.599308    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:44.613029    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:44.613040    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:44.625252    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:44.625263    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:44.661539    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:44.661549    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:44.701592    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:44.701609    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:44.713083    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:44.713094    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:44.724411    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:44.724420    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:44.736370    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:44.736381    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:47.262981    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:48.440345    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
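Besides per-container logs, each gather cycle also collects host-level diagnostics: the kubelet and Docker journals, dmesg, container status, and kubectl describe nodes. A sketch that replays those exact commands locally (again standing in for the SSH-based ssh_runner, which is an assumption of the sketch):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Host-level gather commands copied verbatim from the log above.
        cmds := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
        }
        for _, c := range cmds {
            fmt.Printf("Gathering logs for %s ...\n", c[0])
            out, err := exec.Command("/bin/bash", "-c", c[1]).CombinedOutput()
            if err != nil {
                fmt.Println("gather failed:", err)
            }
            fmt.Print(string(out))
        }
    }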
	I0311 04:18:52.264896    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:52.265091    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:52.286223    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:52.286320    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:52.301361    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:52.301435    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:52.316010    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:52.316083    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:52.327362    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:52.327437    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:52.338010    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:52.338085    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:52.349097    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:52.349167    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:52.359345    4187 logs.go:276] 0 containers: []
	W0311 04:18:52.359360    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:52.359419    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:52.369802    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:52.369826    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:52.369831    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:52.387439    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:52.387450    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:52.403645    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:52.403666    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:52.415396    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:52.415408    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:52.454222    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:52.454231    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:52.458716    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:52.458724    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:52.495937    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:52.495949    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:52.507157    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:52.507167    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:52.518828    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:52.518837    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:52.530751    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:52.530763    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:52.544764    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:52.544777    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:52.556018    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:52.556032    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:52.570505    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:52.570519    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:52.585136    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:52.585150    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:52.623578    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:52.623590    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:52.642862    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:52.642876    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:52.654745    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:52.654757    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:53.441219    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:53.441340    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:53.453672    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:18:53.453749    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:53.469641    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:18:53.469714    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:53.480401    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:18:53.480461    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:53.491868    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:18:53.491936    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:53.503708    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:18:53.503781    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:53.514925    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:18:53.514990    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:53.525642    4133 logs.go:276] 0 containers: []
	W0311 04:18:53.525653    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:53.525706    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:53.536689    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:18:53.536707    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:53.536713    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:53.541494    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:18:53.541504    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:18:53.558062    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:18:53.558076    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:18:53.571603    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:18:53.571615    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:18:53.583914    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:18:53.583928    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:18:53.596990    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:53.597003    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:53.620994    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:18:53.621004    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:53.633222    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:18:53.633237    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:18:53.647836    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:18:53.647851    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:18:53.662348    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:18:53.662358    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:18:53.685755    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:18:53.685769    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:18:53.701876    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:53.701886    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:53.738106    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:18:53.738116    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:18:53.778693    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:18:53.778708    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:18:53.794506    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:18:53.794520    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:18:53.807437    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:18:53.807450    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:18:53.822699    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:53.822710    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:55.181805    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:56.361645    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:00.184083    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:00.184479    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:00.217202    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:00.217353    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:00.237069    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:00.237163    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:00.251242    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:00.251318    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:00.262703    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:00.262783    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:00.273081    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:00.273151    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:00.283701    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:00.283777    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:00.293986    4187 logs.go:276] 0 containers: []
	W0311 04:19:00.293997    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:00.294054    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:00.305015    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:00.305032    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:00.305038    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:00.316750    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:00.316758    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:00.360430    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:00.360443    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:00.374536    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:00.374547    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:00.388277    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:00.388290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:00.403996    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:00.404010    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:00.408750    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:00.408760    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:00.443533    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:00.443548    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:00.455751    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:00.455762    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:00.468996    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:00.469007    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:00.479997    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:00.480010    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:00.505167    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:00.505181    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:00.519470    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:00.519480    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:00.559089    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:00.559098    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:00.572821    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:00.572833    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:00.587146    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:00.587156    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:00.598940    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:00.598952    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:03.116516    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:01.363946    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:01.364273    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:01.406764    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:01.406895    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:01.446873    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:01.446954    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:01.459414    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:01.459490    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:01.471058    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:01.471132    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:01.482770    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:01.482842    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:01.493753    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:01.493818    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:01.505226    4133 logs.go:276] 0 containers: []
	W0311 04:19:01.505238    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:01.505300    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:01.516608    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:01.516628    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:01.516634    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:01.529001    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:01.529012    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:01.553903    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:01.553922    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:01.568785    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:01.568796    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:01.581645    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:01.581659    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:01.595862    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:01.595873    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:01.613202    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:01.613213    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:01.628462    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:01.628472    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:01.633300    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:01.633309    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:01.671389    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:01.671403    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:01.709716    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:01.709728    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:01.724415    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:01.724427    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:01.736949    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:01.736960    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:01.749252    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:01.749261    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:01.763151    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:01.763164    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:01.802242    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:01.802255    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:01.817258    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:01.817268    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:04.335027    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:08.119160    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:08.119515    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:08.157851    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:08.158013    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:08.176982    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:08.177078    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:08.190862    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:08.190940    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:08.202944    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:08.203014    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:08.213307    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:08.213364    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:08.223768    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:08.223828    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:08.233508    4187 logs.go:276] 0 containers: []
	W0311 04:19:08.233517    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:08.233568    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:08.244232    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:08.244252    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:08.244257    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:08.284003    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:08.284016    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:08.298151    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:08.298162    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:08.309985    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:08.309996    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:08.321789    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:08.321800    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:08.334456    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:08.334470    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:08.348973    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:08.348985    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:08.366250    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:08.366263    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:08.380071    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:08.380083    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:08.404940    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:08.404952    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:08.409419    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:08.409427    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:08.420639    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:08.420650    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:08.435960    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:08.435976    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:08.448883    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:08.448895    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:08.489449    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:08.489464    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:08.535458    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:08.535469    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:08.549771    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:08.549785    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
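
Each gathering pass begins by resolving container IDs per component with docker ps name filters. The k8s_ prefix is the container-naming convention cri-dockerd inherits from dockershim, and because ps -a also lists exited containers, components that have restarted report two IDs (current plus previous attempt) while kube-proxy and coredns report one. A sketch of that enumeration step, assuming a local docker CLI (minikube issues the same commands over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose
    // dockershim-style name matches k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
            for _, id := range ids {
                // Tail the last 400 lines of each instance, as in the log above.
                exec.Command("docker", "logs", "--tail", "400", id).Run()
            }
        }
    }
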
	I0311 04:19:09.337619    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:09.338014    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:09.378186    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:09.378324    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:09.399836    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:09.399943    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:09.415175    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:09.415250    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:09.427387    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:09.427466    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:09.438216    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:09.438288    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:09.448767    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:09.448836    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:09.469759    4133 logs.go:276] 0 containers: []
	W0311 04:19:09.469774    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:09.469830    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:09.481209    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:09.481229    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:09.481235    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:09.493029    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:09.493040    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:09.531668    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:09.531678    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:09.536032    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:09.536041    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:09.550784    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:09.550796    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:09.565960    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:09.565976    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:09.583879    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:09.583891    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:09.602089    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:09.602099    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:09.613926    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:09.613935    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:09.647926    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:09.647938    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:09.661823    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:09.661836    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:09.672986    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:09.672999    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:09.699337    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:09.699357    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:09.741938    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:09.741949    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:09.759842    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:09.759854    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:09.771204    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:09.771216    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:09.782945    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:09.782955    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:11.063404    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:12.300078    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:16.063873    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:16.064188    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:16.096357    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:16.096481    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:16.114852    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:16.114948    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:16.128738    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:16.128822    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:16.144014    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:16.144087    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:16.154578    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:16.154647    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:16.165152    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:16.165218    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:16.175304    4187 logs.go:276] 0 containers: []
	W0311 04:19:16.175316    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:16.175377    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:16.185737    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:16.185755    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:16.185761    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:16.222843    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:16.222850    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:16.236453    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:16.236463    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:16.247751    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:16.247761    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:16.259413    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:16.259425    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:16.263521    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:16.263527    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:16.299446    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:16.299459    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:16.313923    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:16.313934    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:16.326782    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:16.326791    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:16.339156    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:16.339168    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:16.356601    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:16.356611    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:16.381059    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:16.381067    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:16.419300    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:16.419311    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:16.436073    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:16.436085    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:16.447661    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:16.447674    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:16.462596    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:16.462610    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:16.476206    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:16.476220    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
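
The recurring "container status" step is deliberately runtime-agnostic: in sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, the command substitution resolves crictl's full path when it is installed (falling back to the bare name otherwise), and if that invocation fails the whole command falls back to a plain docker ps -a, so the listing works on both CRI-based and Docker-only guests.
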
	I0311 04:19:18.990583    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:17.302663    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:17.302928    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:17.335463    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:17.335571    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:17.352419    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:17.352500    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:17.364805    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:17.364872    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:17.380984    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:17.381063    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:17.391653    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:17.391720    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:17.402573    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:17.402643    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:17.412721    4133 logs.go:276] 0 containers: []
	W0311 04:19:17.412731    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:17.412787    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:17.423520    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:17.423538    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:17.423544    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:17.435528    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:17.435539    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:17.451356    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:17.451368    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:17.462948    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:17.462958    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:17.487104    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:17.487112    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:17.499089    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:17.499105    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:17.514014    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:17.514027    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:17.525600    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:17.525611    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:17.545293    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:17.545306    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:17.582024    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:17.582038    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:17.619090    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:17.619101    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:17.634226    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:17.634240    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:17.653858    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:17.653867    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:17.658143    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:17.658149    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:17.692245    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:17.692260    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:17.707412    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:17.707426    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:17.721407    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:17.721420    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:20.239874    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:23.992700    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:23.992931    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:24.012395    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:24.012488    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:24.025795    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:24.025863    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:24.038186    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:24.038257    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:24.049161    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:24.049232    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:24.060431    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:24.060491    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:25.242066    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:25.242384    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:25.272923    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:25.273048    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:25.294405    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:25.294493    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:25.307524    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:25.307600    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:25.319255    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:25.319329    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:25.330909    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:25.330981    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:25.342079    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:25.342147    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:25.352718    4133 logs.go:276] 0 containers: []
	W0311 04:19:25.352732    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:25.352792    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:25.363587    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:25.363604    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:25.363610    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:25.378595    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:25.378608    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:25.401977    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:25.401987    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:25.413602    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:25.413613    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:25.426931    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:25.426944    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:25.441667    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:25.441678    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:25.452509    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:25.452521    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:25.470179    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:25.470192    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:25.481384    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:25.481394    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:25.506581    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:25.506592    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:25.511205    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:25.511211    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:25.551843    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:25.551855    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:25.567652    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:25.567663    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:25.607125    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:25.607146    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:25.647247    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:25.647258    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:25.658794    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:25.658810    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:25.670332    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:25.670342    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:24.071206    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:24.073123    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:24.087467    4187 logs.go:276] 0 containers: []
	W0311 04:19:24.087479    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:24.087539    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:24.097592    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:24.097611    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:24.097616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:24.111449    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:24.111460    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:24.122824    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:24.122835    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:24.134305    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:24.134314    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:24.173560    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:24.173569    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:24.190980    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:24.190989    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:24.202887    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:24.202898    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:24.226954    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:24.226961    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:24.263146    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:24.263157    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:24.277136    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:24.277147    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:24.291494    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:24.291505    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:24.308212    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:24.308221    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:24.320059    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:24.320071    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:24.324023    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:24.324030    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:24.363153    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:24.363165    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:24.374866    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:24.374876    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:24.385943    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:24.385957    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:26.902343    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:28.183978    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:31.904896    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:31.905217    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:31.933279    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:31.933408    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:31.952402    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:31.952498    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:31.965969    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:31.966044    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:31.977305    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:31.977375    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:31.987898    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:31.987962    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:31.998603    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:31.998677    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:32.008990    4187 logs.go:276] 0 containers: []
	W0311 04:19:32.009001    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:32.009060    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:32.019221    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:32.019238    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:32.019243    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:32.031040    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:32.031051    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:32.035170    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:32.035177    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:32.078165    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:32.078179    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:32.124243    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:32.124253    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:32.138206    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:32.138219    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:32.155537    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:32.155548    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:32.166685    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:32.166696    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:32.178472    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:32.178486    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:32.214724    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:32.214736    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:32.235282    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:32.235295    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:32.246865    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:32.246879    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:32.260280    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:32.260289    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:32.272191    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:32.272202    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:32.288799    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:32.288810    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:32.301860    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:32.301872    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:32.326220    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:32.326229    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:33.184706    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:33.184947    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:33.209219    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:33.209334    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:33.224956    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:33.225042    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:33.239874    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:33.239944    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:33.254204    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:33.254269    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:33.264584    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:33.264646    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:33.275381    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:33.275446    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:33.286020    4133 logs.go:276] 0 containers: []
	W0311 04:19:33.286033    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:33.286085    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:33.296608    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:33.296629    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:33.296637    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:33.331731    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:33.331744    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:33.369887    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:33.369899    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:33.381272    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:33.381285    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:33.417903    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:33.417917    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:33.447841    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:33.447852    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:33.486010    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:33.486021    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:33.511654    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:33.511665    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:33.516722    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:33.516731    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:33.531194    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:33.531203    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:33.545300    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:33.545313    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:33.557678    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:33.557688    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:33.572876    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:33.572888    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:33.587399    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:33.587412    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:33.599550    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:33.599564    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:33.612023    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:33.612040    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:33.631312    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:33.631333    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
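
Besides the per-container tails, every cycle also captures host-level evidence from inside the VM: the last 400 journal lines for the kubelet (journalctl -u kubelet -n 400) and for the runtime units (journalctl -u docker -u cri-docker -n 400), kernel ring-buffer messages filtered to warning severity and above (dmesg --level warn,err,crit,alert,emerg), and a kubectl describe nodes run against the in-VM kubeconfig at /var/lib/minikube/kubeconfig. Throughout this section the probe never succeeds, so the same gather cycle simply repeats.
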
	I0311 04:19:34.842107    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:36.144990    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:39.843186    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:39.843359    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:39.856260    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:39.856332    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:39.869074    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:39.869142    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:39.879930    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:39.879992    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:39.894363    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:39.894425    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:39.904426    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:39.904494    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:39.914953    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:39.915026    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:39.925165    4187 logs.go:276] 0 containers: []
	W0311 04:19:39.925179    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:39.925231    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:39.936113    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:39.936139    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:39.936145    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:39.940427    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:39.940437    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:39.975303    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:39.975314    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:39.989678    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:39.989689    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:40.001133    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:40.001146    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:40.012421    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:40.012434    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:40.035887    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:40.035894    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:40.072904    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:40.072918    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:40.110159    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:40.110171    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:40.121475    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:40.121488    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:40.135570    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:40.135583    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:40.153514    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:40.153525    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:40.164998    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:40.165009    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:40.179536    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:40.179546    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:40.191372    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:40.191384    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:40.204337    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:40.204349    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:40.216183    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:40.216195    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:42.732567    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:41.147158    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:41.147516    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:41.181595    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:41.181740    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:41.202484    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:41.202584    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:41.219365    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:41.219441    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:41.233292    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:41.233359    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:41.243814    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:41.243884    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:41.254335    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:41.254403    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:41.264102    4133 logs.go:276] 0 containers: []
	W0311 04:19:41.264112    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:41.264170    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:41.274843    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:41.274860    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:41.274865    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:41.286299    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:41.286312    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:41.320185    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:41.320198    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:41.334562    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:41.334575    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:41.349497    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:41.349507    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:41.372550    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:41.372562    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:41.387819    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:41.387835    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:41.426439    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:41.426450    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:41.463330    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:41.463346    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:41.478128    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:41.478139    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:41.489959    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:41.489970    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:41.502511    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:41.502521    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:41.526090    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:41.526101    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:41.530274    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:41.530281    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:41.543959    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:41.543973    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:41.559172    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:41.559187    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:41.570508    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:41.570520    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:44.087436    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:47.734690    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
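The interleaved PIDs here (4133 and 4187) are two concurrent minikube processes, each polling the guest apiserver at https://10.0.2.15:8443/healthz and, on every timeout, re-running the diagnostic sweep that follows. A minimal sketch of such a probe, assuming a short client timeout and skipped TLS verification (neither detail appears in the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver health endpoint.
    // The timeout value and TLS settings are assumptions for illustration;
    // the log only shows that Client.Timeout was exceeded awaiting headers.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err) // e.g. context deadline exceeded, as in the log
        }
    }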
	I0311 04:19:47.734838    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:47.751366    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:47.751454    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:47.764308    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:47.764381    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:47.774925    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:47.774996    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:47.785625    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:47.785706    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:47.795861    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:47.795923    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:47.806157    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:47.806229    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:47.816488    4187 logs.go:276] 0 containers: []
	W0311 04:19:47.816499    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:47.816580    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:47.829816    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:47.829831    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:47.829836    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:47.842430    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:47.842445    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:47.854348    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:47.854359    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:47.871357    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:47.871369    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:47.883350    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:47.883359    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:47.896727    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:47.896738    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:47.921474    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:47.921481    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:47.925621    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:47.925627    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:47.963567    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:47.963576    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:47.978501    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:47.978512    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:48.017491    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:48.017499    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:48.054483    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:48.054494    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:48.066216    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:48.066228    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:48.085715    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:48.085730    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:48.097154    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:48.097165    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:48.108095    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:48.108106    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:48.122053    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:48.122065    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
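Each sweep starts by resolving container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, exactly as the lines above show. A hypothetical helper mirroring those commands (containerIDs is an illustrative name, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs, running or exited, whose name
    // matches the kubeadm-style k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        // Two IDs per component, as with [3f6e8cee7efa 2edd01543dcf] above,
        // indicate a restarted container: the live one plus its exited
        // predecessor.
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }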
	I0311 04:19:49.088510    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:49.088612    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:49.101746    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:49.101831    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:49.112539    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:49.112604    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:49.122853    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:49.122925    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:49.133241    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:49.133315    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:49.143138    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:49.143203    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:49.154082    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:49.154156    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:49.163848    4133 logs.go:276] 0 containers: []
	W0311 04:19:49.163859    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:49.163915    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:49.174281    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:49.174297    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:49.174302    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:49.188580    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:49.188594    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:49.200579    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:49.200589    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:49.218022    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:49.218031    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:49.229693    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:49.229703    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:49.267404    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:49.267418    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:49.304423    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:49.304436    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:49.319756    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:49.319767    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:49.331401    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:49.331411    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:49.343303    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:49.343314    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:49.358232    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:49.358243    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:49.376719    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:49.376731    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:49.401018    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:49.401026    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:49.405241    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:49.405249    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:49.439806    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:49.439815    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:49.457963    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:49.457977    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:49.472100    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:49.472108    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
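Every resolved ID is then tailed for its last 400 lines. A sketch of the docker logs --tail 400 <id> step seen throughout this section:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the `docker logs --tail 400 <id>` commands
    // in the log; 400 is the per-container line budget used here.
    // CombinedOutput is used because container processes often log to stderr.
    func tailContainerLogs(id string, lines int) (string, error) {
        out, err := exec.Command("docker", "logs",
            "--tail", fmt.Sprint(lines), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Container ID taken from the log above; on another host it differs.
        out, err := tailContainerLogs("51ea7e87d708", 400)
        if err != nil {
            fmt.Println("docker logs failed:", err)
            return
        }
        fmt.Print(out)
    }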
	I0311 04:19:50.638275    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:51.988557    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:55.640775    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:55.641171    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:55.674470    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:55.674602    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:55.693465    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:55.693566    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:55.710054    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:55.710126    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:55.722610    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:55.722693    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:55.733377    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:55.733448    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:55.744079    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:55.744151    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:55.754421    4187 logs.go:276] 0 containers: []
	W0311 04:19:55.754431    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:55.754485    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:55.771586    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:55.771603    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:55.771610    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:55.782826    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:55.782837    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:55.803774    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:55.803791    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:55.822008    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:55.822020    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:55.836249    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:55.836264    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:55.849933    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:55.849944    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:55.861282    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:55.861292    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:55.872539    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:55.872548    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:55.909486    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:55.909495    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:55.913532    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:55.913537    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:55.950648    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:55.950660    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:55.969825    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:55.969837    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:55.984256    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:55.984269    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:55.998272    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:55.998285    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:56.009792    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:56.009805    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:56.033382    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:56.033389    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:56.071604    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:56.071616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:58.588295    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:56.990325    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:56.990455    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:57.007851    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:19:57.007931    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:57.019308    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:19:57.019381    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:57.029169    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:19:57.029237    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:57.039702    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:19:57.039778    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:57.061939    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:19:57.062019    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:57.075417    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:19:57.075482    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:57.086513    4133 logs.go:276] 0 containers: []
	W0311 04:19:57.086529    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:57.086591    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:57.101577    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:19:57.101596    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:19:57.101603    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:19:57.120330    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:19:57.120344    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:19:57.131818    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:19:57.131830    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:19:57.169725    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:19:57.169738    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:19:57.183460    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:19:57.183474    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:19:57.197767    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:19:57.197778    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:19:57.209184    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:19:57.209197    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:19:57.220125    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:57.220137    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:57.244168    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:19:57.244176    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:57.256437    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:57.256449    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:57.261069    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:19:57.261078    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:19:57.275609    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:19:57.275621    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:19:57.286969    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:19:57.286979    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:19:57.305700    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:19:57.305710    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:19:57.320981    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:19:57.320992    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:19:57.334917    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:57.334931    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:57.369413    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:57.369427    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
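The "describe nodes" step shells out to the kubectl binary staged inside the guest, pointed at the guest-local kubeconfig; both paths in the sketch below are verbatim from the log, while the wrapper itself is only illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Paths copied from the log: the kubectl shipped inside the guest
        // and the guest-local kubeconfig.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }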
	I0311 04:19:59.909408    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:03.590515    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:03.590678    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:03.611660    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:03.611759    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:03.627492    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:03.627571    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:03.655873    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:03.655947    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:03.668150    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:03.668280    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:03.685013    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:03.685078    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:03.696537    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:03.696614    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:03.707073    4187 logs.go:276] 0 containers: []
	W0311 04:20:03.707083    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:03.707136    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:03.717947    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:03.717966    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:03.717973    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:03.729995    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:03.730006    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:03.734617    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:03.734627    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:03.770503    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:03.770517    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:03.785661    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:03.785670    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:03.797592    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:03.797603    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:03.812232    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:03.812244    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:03.851072    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:03.851087    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:03.864985    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:03.864996    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:03.877872    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:03.877881    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:03.889748    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:03.889760    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:03.913176    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:03.913185    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:03.953051    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:03.953062    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:03.966922    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:03.966932    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:03.979071    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:03.979083    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:03.996276    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:03.996290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:04.008899    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:04.008914    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
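The "container status" step is runtime-agnostic: it tries crictl and falls back to plain docker ps -a, per the one-liner in the log. A sketch of the same fallback:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, matching
    // the `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`
    // one-liner above.
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(out)
    }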
	I0311 04:20:04.911583    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:04.911719    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:04.932040    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:04.932153    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:04.946581    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:04.946656    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:04.958986    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:04.959058    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:04.969742    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:04.969819    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:04.980121    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:04.980213    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:04.991073    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:04.991140    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:05.001277    4133 logs.go:276] 0 containers: []
	W0311 04:20:05.001289    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:05.001344    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:05.012176    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:05.012193    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:05.012198    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:05.049241    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:05.049254    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:05.061434    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:05.061447    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:05.073091    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:05.073103    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:05.087238    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:05.087251    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:05.101387    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:05.101399    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:05.112949    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:05.112959    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:05.127827    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:05.127838    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:05.139654    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:05.139664    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:05.177841    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:05.177849    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:05.181916    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:05.181924    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:05.218613    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:05.218624    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:05.234515    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:05.234529    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:05.245589    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:05.245599    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:05.260312    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:05.260325    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:05.271495    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:05.271507    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:05.288319    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:05.288331    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
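The dmesg pass keeps only warn-and-above kernel messages and completes within a few milliseconds each cycle (compare the surrounding timestamps). The pipeline is copied verbatim in the sketch below; it needs a shell because of the pipe to tail:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Pipeline copied from the log. Only warn, err, crit, alert, and
        // emerg kernel messages are kept, trimmed to the last 400 lines.
        cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("dmesg failed:", err)
            return
        }
        fmt.Print(string(out))
    }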
	I0311 04:20:06.521108    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:07.813623    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:11.523209    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:11.523388    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:11.548253    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:11.548344    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:11.562998    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:11.563081    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:11.574193    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:11.574253    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:11.584567    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:11.584640    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:11.595047    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:11.595127    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:11.605750    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:11.605812    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:11.615541    4187 logs.go:276] 0 containers: []
	W0311 04:20:11.615551    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:11.615600    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:11.625903    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:11.625921    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:11.625926    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:11.660733    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:11.660744    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:11.672279    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:11.672290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:11.689147    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:11.689157    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:11.701235    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:11.701248    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:11.712815    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:11.712826    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:11.726258    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:11.726268    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:11.740209    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:11.740219    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:11.752219    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:11.752233    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:11.768074    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:11.768088    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:11.772349    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:11.772356    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:11.788664    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:11.788673    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:11.826360    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:11.826370    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:11.838138    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:11.838149    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:11.877426    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:11.877446    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:11.892910    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:11.892923    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:11.907647    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:11.907658    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:12.816301    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:12.816798    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:12.852503    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:12.852641    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:12.873227    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:12.873342    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:12.888417    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:12.888499    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:12.907595    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:12.907668    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:12.920850    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:12.920919    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:12.931550    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:12.931611    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:12.942336    4133 logs.go:276] 0 containers: []
	W0311 04:20:12.942347    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:12.942406    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:12.953024    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:12.953042    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:12.953048    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:12.968751    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:12.968767    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:12.984523    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:12.984533    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:13.019714    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:13.019729    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:13.033826    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:13.033838    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:13.046425    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:13.046435    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:13.060904    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:13.060914    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:13.097487    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:13.097498    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:13.135557    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:13.135570    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:13.148082    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:13.148095    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:13.161221    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:13.161233    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:13.172793    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:13.172826    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:13.196795    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:13.196802    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:13.208218    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:13.208227    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:13.212828    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:13.212835    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:13.226118    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:13.226130    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:13.237117    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:13.237128    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:15.757889    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:14.430445    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:20.760454    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:20.760796    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:20.791166    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:20.791300    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:20.811183    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:20.811281    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:20.825444    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:20.825529    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:20.837529    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:20.837600    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:20.848075    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:20.848151    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:20.858836    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:20.858908    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:20.869054    4133 logs.go:276] 0 containers: []
	W0311 04:20:20.869065    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:20.869120    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:20.879618    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:20.879637    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:20.879642    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:20.891521    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:20.891532    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:20.926380    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:20.926394    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:20.940479    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:20.940492    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:20.981757    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:20.981776    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:20.996740    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:20.996753    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:21.009030    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:21.009044    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:19.432771    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:19.433177    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:19.468123    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:19.468266    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:19.487696    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:19.487797    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:19.506813    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:19.506893    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:19.520149    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:19.520223    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:19.530504    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:19.530571    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:19.544743    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:19.544819    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:19.558906    4187 logs.go:276] 0 containers: []
	W0311 04:20:19.558921    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:19.558980    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:19.570284    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:19.570307    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:19.570314    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:19.581485    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:19.581497    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:19.603884    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:19.603892    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:19.639970    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:19.639977    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:19.660600    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:19.660610    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:19.671980    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:19.671992    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:19.684433    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:19.684447    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:19.709069    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:19.709081    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:19.744433    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:19.744446    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:19.782785    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:19.782796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:19.797785    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:19.797796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:19.813881    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:19.813890    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:19.818740    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:19.818750    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:19.835381    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:19.835394    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:19.846939    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:19.846950    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:19.866522    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:19.866533    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:19.881024    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:19.881035    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
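All of these commands execute inside the QEMU guest via minikube's ssh_runner (the ssh_runner.go:195 call sites above). A rough stand-in using golang.org/x/crypto/ssh; the address, user, and key path are placeholders, since the log does not show them:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH opens one session on the guest and runs one command,
    // loosely approximating what ssh_runner does for each Run: line above.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath) // use an absolute path in practice
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local VM only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Placeholder endpoint and credentials, labeled as assumptions.
        out, err := runOverSSH("127.0.0.1:2222", "docker",
            "/path/to/minikube/id_rsa", "sudo journalctl -u kubelet -n 400")
        if err != nil {
            fmt.Println("ssh run failed:", err)
            return
        }
        fmt.Print(out)
    }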
	I0311 04:20:22.393735    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:21.024555    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:21.024567    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:21.040260    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:21.040271    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:21.081917    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:21.081928    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:21.093555    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:21.093566    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:21.116474    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:21.116485    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:21.128107    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:21.128117    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:21.140036    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:21.140048    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:21.152251    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:21.152263    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:21.156910    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:21.156917    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:21.179315    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:21.179327    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:23.695955    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:27.396043    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:27.396359    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:27.426005    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:27.426122    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:27.445357    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:27.445454    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:27.460224    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:27.460305    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:27.472678    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:27.472748    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:27.483128    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:27.483192    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:27.494060    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:27.494138    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:27.504434    4187 logs.go:276] 0 containers: []
	W0311 04:20:27.504447    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:27.504500    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:27.515285    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:27.515301    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:27.515307    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:27.526246    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:27.526256    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:27.537519    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:27.537530    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:27.575350    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:27.575359    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:27.589578    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:27.589591    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:27.603120    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:27.603131    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:27.614506    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:27.614517    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:27.626007    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:27.626017    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:27.638657    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:27.638668    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:27.643016    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:27.643027    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:27.676968    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:27.676980    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:27.716101    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:27.716113    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:27.733046    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:27.733056    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:27.745725    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:27.745739    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:27.769764    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:27.769772    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:27.783937    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:27.783949    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:27.798787    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:27.798796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
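Each pass begins by resolving per-component container IDs with a docker name filter; an empty result produces the "No container was found matching \"kindnet\"" warning seen throughout. A hedged Go sketch of that discovery step (component list and filter prefix copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // idsFor returns the IDs of containers whose name matches k8s_<component>.
    func idsFor(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
    		ids, err := idsFor(c)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    		}
    	}
    }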
	I0311 04:20:28.698269    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:28.698546    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:28.726988    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:28.727084    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:28.740259    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:28.740335    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:28.751804    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:28.751873    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:28.762701    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:28.762775    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:28.773573    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:28.773647    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:28.788421    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:28.788490    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:28.798455    4133 logs.go:276] 0 containers: []
	W0311 04:20:28.798465    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:28.798521    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:28.808610    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:28.808631    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:28.808637    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:28.846724    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:28.846736    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:28.885039    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:28.885053    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:28.899366    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:28.899378    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:28.914488    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:28.914499    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:28.926141    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:28.926152    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:28.937104    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:28.937115    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:28.960151    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:28.960160    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:28.975094    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:28.975105    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:28.987428    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:28.987438    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:29.000269    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:29.000280    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:29.004911    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:29.004920    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:29.026500    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:29.026509    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:29.038087    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:29.038099    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:29.055045    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:29.055057    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:29.066654    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:29.066664    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:29.104347    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:29.104358    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:30.312610    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:31.624273    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:35.314898    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
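The interleaved "Checking apiserver healthz ..." / "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" pairs are a poll loop: GET /healthz with a per-request client timeout, then retry after gathering logs. A minimal sketch of that loop; skipping TLS verification is an assumption for brevity (minikube verifies against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // source of the Client.Timeout error text above
    		Transport: &http.Transport{
    			// Sketch-only shortcut; real code pins the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. context deadline exceeded
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    }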
	I0311 04:20:35.315203    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:35.351439    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:35.351565    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:35.370437    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:35.370524    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:35.384813    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:35.384877    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:35.397389    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:35.397466    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:35.408320    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:35.408402    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:35.418625    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:35.418683    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:35.429224    4187 logs.go:276] 0 containers: []
	W0311 04:20:35.429234    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:35.429293    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:35.443853    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:35.443873    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:35.443879    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:35.458232    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:35.458243    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:35.469599    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:35.469611    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:35.480772    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:35.480783    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:35.505063    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:35.505071    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:35.543270    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:35.543279    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:35.548309    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:35.548316    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:35.566508    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:35.566523    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:35.580781    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:35.580792    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:35.595841    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:35.595850    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:35.607637    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:35.607648    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:35.642163    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:35.642174    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:35.680317    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:35.680326    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:35.692279    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:35.692290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:35.709450    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:35.709459    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:35.721207    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:35.721221    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:35.734381    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:35.734391    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:38.247423    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:36.626688    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:36.626923    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:36.661424    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:36.661507    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:36.674784    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:36.674855    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:36.686121    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:36.686191    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:36.696495    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:36.696558    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:36.707950    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:36.708014    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:36.718639    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:36.718707    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:36.728251    4133 logs.go:276] 0 containers: []
	W0311 04:20:36.728264    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:36.728315    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:36.742037    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:36.742055    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:36.742064    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:36.778031    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:36.778044    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:36.791843    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:36.791855    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:36.806278    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:36.806295    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:36.817509    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:36.817521    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:36.832853    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:36.832866    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:36.846376    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:36.846387    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:36.858525    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:36.858536    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:36.896913    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:36.896921    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:36.911390    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:36.911401    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:36.950239    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:36.950251    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:36.967132    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:36.967145    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:36.978608    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:36.978623    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:36.983213    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:36.983220    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:36.994841    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:36.994851    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:37.006780    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:37.006791    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:37.021080    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:37.021091    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:39.545208    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:43.248565    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:43.248722    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:43.260491    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:43.260566    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:43.270771    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:43.270844    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:43.281155    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:43.281226    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:43.291636    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:43.291709    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:43.301787    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:43.301855    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:43.312389    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:43.312457    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:43.322593    4187 logs.go:276] 0 containers: []
	W0311 04:20:43.322606    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:43.322659    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:43.332836    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:43.332852    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:43.332857    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:43.346958    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:43.346968    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:43.361337    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:43.361348    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:43.375191    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:43.375203    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:43.386547    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:43.386558    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:43.401599    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:43.401609    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:43.413273    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:43.413284    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:43.425650    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:43.425660    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:43.463941    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:43.463949    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:43.501987    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:43.501997    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:43.514158    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:43.514169    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:43.527220    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:43.527230    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:43.537870    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:43.537882    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:43.560194    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:43.560204    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:43.564246    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:43.564255    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:43.602691    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:43.602704    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:43.619390    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:43.619402    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:44.547343    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:44.547531    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:44.562867    4133 logs.go:276] 2 containers: [08ec4c137e8e 8c896a6db6a9]
	I0311 04:20:44.562958    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:44.574963    4133 logs.go:276] 2 containers: [d949a5f4c26f 42c9c863cbbd]
	I0311 04:20:44.575038    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:44.585804    4133 logs.go:276] 1 containers: [34abc242d0c3]
	I0311 04:20:44.585873    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:44.596493    4133 logs.go:276] 2 containers: [5265d41d1ccb 51ea7e87d708]
	I0311 04:20:44.596562    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:44.607095    4133 logs.go:276] 1 containers: [5cc0b7983e52]
	I0311 04:20:44.607159    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:44.617848    4133 logs.go:276] 2 containers: [5f24fd902deb 3947628dca50]
	I0311 04:20:44.617913    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:44.628746    4133 logs.go:276] 0 containers: []
	W0311 04:20:44.628762    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:44.628823    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:44.639051    4133 logs.go:276] 2 containers: [81b69f94f17e 39601d501305]
	I0311 04:20:44.639070    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:44.639076    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:44.644122    4133 logs.go:123] Gathering logs for etcd [d949a5f4c26f] ...
	I0311 04:20:44.644129    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d949a5f4c26f"
	I0311 04:20:44.657780    4133 logs.go:123] Gathering logs for kube-scheduler [51ea7e87d708] ...
	I0311 04:20:44.657790    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ea7e87d708"
	I0311 04:20:44.672388    4133 logs.go:123] Gathering logs for storage-provisioner [81b69f94f17e] ...
	I0311 04:20:44.672402    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b69f94f17e"
	I0311 04:20:44.683493    4133 logs.go:123] Gathering logs for kube-scheduler [5265d41d1ccb] ...
	I0311 04:20:44.683505    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5265d41d1ccb"
	I0311 04:20:44.699280    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:20:44.699294    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:44.711236    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:44.711247    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:44.749875    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:44.749887    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:44.787458    4133 logs.go:123] Gathering logs for kube-apiserver [08ec4c137e8e] ...
	I0311 04:20:44.787472    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08ec4c137e8e"
	I0311 04:20:44.801795    4133 logs.go:123] Gathering logs for kube-controller-manager [3947628dca50] ...
	I0311 04:20:44.801805    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3947628dca50"
	I0311 04:20:44.816724    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:44.816736    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:44.839545    4133 logs.go:123] Gathering logs for kube-apiserver [8c896a6db6a9] ...
	I0311 04:20:44.839555    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c896a6db6a9"
	I0311 04:20:44.881004    4133 logs.go:123] Gathering logs for etcd [42c9c863cbbd] ...
	I0311 04:20:44.881018    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42c9c863cbbd"
	I0311 04:20:44.898642    4133 logs.go:123] Gathering logs for coredns [34abc242d0c3] ...
	I0311 04:20:44.898653    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34abc242d0c3"
	I0311 04:20:44.910076    4133 logs.go:123] Gathering logs for kube-proxy [5cc0b7983e52] ...
	I0311 04:20:44.910089    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc0b7983e52"
	I0311 04:20:44.921462    4133 logs.go:123] Gathering logs for kube-controller-manager [5f24fd902deb] ...
	I0311 04:20:44.921471    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f24fd902deb"
	I0311 04:20:44.939125    4133 logs.go:123] Gathering logs for storage-provisioner [39601d501305] ...
	I0311 04:20:44.939136    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39601d501305"
	I0311 04:20:46.132845    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:47.452800    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:52.453646    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:52.453704    4133 kubeadm.go:591] duration metric: took 4m5.126602625s to restartPrimaryControlPlane
	W0311 04:20:52.453751    4133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 04:20:52.453773    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0311 04:20:53.453167    4133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 04:20:53.458060    4133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:20:53.460824    4133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:20:53.463611    4133 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 04:20:53.463618    4133 kubeadm.go:156] found existing configuration files:
	
	I0311 04:20:53.463641    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf
	I0311 04:20:53.466697    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 04:20:53.466724    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:20:53.469471    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf
	I0311 04:20:53.472089    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 04:20:53.472114    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:20:53.475131    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf
	I0311 04:20:53.478147    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 04:20:53.478177    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:20:53.480782    4133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf
	I0311 04:20:53.483699    4133 kubeadm.go:162] "https://control-plane.minikube.internal:50305" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50305 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 04:20:53.483723    4133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
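The block above, run right after kubeadm reset, is a stale-kubeconfig sweep: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, grep for the expected control-plane endpoint and remove the file when the endpoint is absent (here every file is already gone, so each grep exits 2 and each rm -f is a no-op). A sketch of the same sweep, with the endpoint copied from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50305"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			os.Remove(path) // error ignored, mirrors `rm -f`
    		}
    	}
    }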
	I0311 04:20:53.486804    4133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 04:20:53.506232    4133 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0311 04:20:53.506258    4133 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 04:20:53.554244    4133 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 04:20:53.554325    4133 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 04:20:53.554374    4133 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0311 04:20:53.605530    4133 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 04:20:51.135203    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:51.135509    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:51.169608    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:51.169728    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:51.190667    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:51.190761    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:51.204338    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:51.204418    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:51.216019    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:51.216099    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:51.226652    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:51.226733    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:51.237473    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:51.237549    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:51.247616    4187 logs.go:276] 0 containers: []
	W0311 04:20:51.247631    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:51.247689    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:51.262299    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:51.262318    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:51.262323    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:51.273485    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:51.273497    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:51.288601    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:51.288612    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:51.302202    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:51.302212    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:51.313938    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:51.313948    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:51.324763    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:51.324774    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:51.347712    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:51.347720    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:51.388869    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:51.388881    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:51.406794    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:51.406804    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:51.444670    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:51.444682    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:51.449579    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:51.449586    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:51.485064    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:51.485076    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:51.502604    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:51.502616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:51.516531    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:51.516541    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:51.529907    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:51.529920    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:51.545493    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:51.545504    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:51.557748    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:51.557759    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:53.613734    4133 out.go:204]   - Generating certificates and keys ...
	I0311 04:20:53.613766    4133 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 04:20:53.613796    4133 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 04:20:53.613850    4133 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 04:20:53.613891    4133 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 04:20:53.613934    4133 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 04:20:53.613969    4133 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 04:20:53.614002    4133 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 04:20:53.614034    4133 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 04:20:53.614071    4133 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 04:20:53.614120    4133 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 04:20:53.614144    4133 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 04:20:53.614171    4133 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 04:20:53.741838    4133 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 04:20:54.090972    4133 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 04:20:54.166405    4133 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 04:20:54.208171    4133 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 04:20:54.240510    4133 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 04:20:54.240849    4133 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 04:20:54.240879    4133 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 04:20:54.321338    4133 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 04:20:54.324642    4133 out.go:204]   - Booting up control plane ...
	I0311 04:20:54.324689    4133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 04:20:54.324735    4133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 04:20:54.324768    4133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 04:20:54.324819    4133 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 04:20:54.324900    4133 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 04:20:54.071672    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:58.825781    4133 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502272 seconds
	I0311 04:20:58.825847    4133 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 04:20:58.829933    4133 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 04:20:59.338979    4133 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 04:20:59.339077    4133 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-745000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 04:20:59.842840    4133 kubeadm.go:309] [bootstrap-token] Using token: kg6c8y.a4p5bpadbpysmcdj
	I0311 04:20:59.848681    4133 out.go:204]   - Configuring RBAC rules ...
	I0311 04:20:59.848737    4133 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 04:20:59.848791    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 04:20:59.855517    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 04:20:59.856440    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 04:20:59.857263    4133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 04:20:59.858127    4133 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 04:20:59.861471    4133 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 04:21:00.018162    4133 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 04:21:00.246782    4133 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 04:21:00.247441    4133 kubeadm.go:309] 
	I0311 04:21:00.247470    4133 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 04:21:00.247473    4133 kubeadm.go:309] 
	I0311 04:21:00.247511    4133 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 04:21:00.247517    4133 kubeadm.go:309] 
	I0311 04:21:00.247554    4133 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 04:21:00.247593    4133 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 04:21:00.247667    4133 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 04:21:00.247671    4133 kubeadm.go:309] 
	I0311 04:21:00.247694    4133 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 04:21:00.247696    4133 kubeadm.go:309] 
	I0311 04:21:00.247719    4133 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 04:21:00.247722    4133 kubeadm.go:309] 
	I0311 04:21:00.247752    4133 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 04:21:00.247821    4133 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 04:21:00.247921    4133 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 04:21:00.247925    4133 kubeadm.go:309] 
	I0311 04:21:00.247966    4133 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 04:21:00.248009    4133 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 04:21:00.248013    4133 kubeadm.go:309] 
	I0311 04:21:00.248093    4133 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kg6c8y.a4p5bpadbpysmcdj \
	I0311 04:21:00.248159    4133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e \
	I0311 04:21:00.248173    4133 kubeadm.go:309] 	--control-plane 
	I0311 04:21:00.248175    4133 kubeadm.go:309] 
	I0311 04:21:00.248218    4133 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 04:21:00.248223    4133 kubeadm.go:309] 
	I0311 04:21:00.248264    4133 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kg6c8y.a4p5bpadbpysmcdj \
	I0311 04:21:00.248314    4133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e 
	I0311 04:21:00.248373    4133 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 04:21:00.248380    4133 cni.go:84] Creating CNI manager for ""
	I0311 04:21:00.248387    4133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:21:00.252402    4133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 04:21:00.259336    4133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 04:21:00.263018    4133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
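Here minikube pushes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist to set up the bridge CNI it recommended two lines earlier. The log does not reproduce the file's contents; the sketch below writes a generic bridge-plus-loopback conflist of the usual shape (the subnet, network name, and every field value are assumptions, not the actual file):

    package main

    import "os"

    // A generic bridge/loopback conflist of the shape CNI expects; the real
    // file minikube writes may differ in fields and values.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "loopback"}
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }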
	I0311 04:21:00.267820    4133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 04:21:00.267866    4133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-745000 minikube.k8s.io/updated_at=2024_03_11T04_21_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=running-upgrade-745000 minikube.k8s.io/primary=true
	I0311 04:21:00.267866    4133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 04:21:00.310468    4133 ops.go:34] apiserver oom_adj: -16
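The "apiserver oom_adj: -16" read above confirms the apiserver's OOM-kill priority was lowered; it comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run a few lines earlier. A sketch of the same check (note /proc/<pid>/oom_adj is the legacy interface the log reads; newer code would use oom_score_adj):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("no apiserver process:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }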
	I0311 04:21:00.310496    4133 kubeadm.go:1106] duration metric: took 42.670792ms to wait for elevateKubeSystemPrivileges
	W0311 04:21:00.310572    4133 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 04:21:00.310578    4133 kubeadm.go:393] duration metric: took 4m12.997984292s to StartCluster
	I0311 04:21:00.310587    4133 settings.go:142] acquiring lock: {Name:mk914df43a11d01b4609d1cefd86c6d6814b7b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:00.310663    4133 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:21:00.311049    4133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:00.311264    4133 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:21:00.315293    4133 out.go:177] * Verifying Kubernetes components...
	I0311 04:21:00.311281    4133 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 04:21:00.311341    4133 config.go:182] Loaded profile config "running-upgrade-745000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:21:00.322292    4133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:21:00.322296    4133 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-745000"
	I0311 04:21:00.322308    4133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-745000"
	I0311 04:21:00.322293    4133 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-745000"
	I0311 04:21:00.322329    4133 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-745000"
	W0311 04:21:00.322339    4133 addons.go:243] addon storage-provisioner should already be in state true
	I0311 04:21:00.322349    4133 host.go:66] Checking if "running-upgrade-745000" exists ...
	I0311 04:21:00.323299    4133 kapi.go:59] client config for running-upgrade-745000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/running-upgrade-745000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10604ffd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 04:21:00.323411    4133 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-745000"
	W0311 04:21:00.323416    4133 addons.go:243] addon default-storageclass should already be in state true
	I0311 04:21:00.323424    4133 host.go:66] Checking if "running-upgrade-745000" exists ...
	I0311 04:21:00.328276    4133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:21:00.332334    4133 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 04:21:00.332340    4133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 04:21:00.332346    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
	I0311 04:21:00.332921    4133 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:00.332926    4133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 04:21:00.332930    4133 sshutil.go:53] new ssh client: &{IP:localhost Port:50273 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/running-upgrade-745000/id_rsa Username:docker}
	I0311 04:21:00.402392    4133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:21:00.407144    4133 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:21:00.407187    4133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:21:00.411090    4133 api_server.go:72] duration metric: took 99.818542ms to wait for apiserver process to appear ...
	I0311 04:21:00.411097    4133 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:21:00.411104    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:00.441766    4133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:00.442034    4133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
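[Editor's note] Both minikube processes interleaved in this section (pid 4133 for running-upgrade-745000, pid 4187 for stopped-upgrade-629000) spend most of the remaining log in the same wait loop: GET https://10.0.2.15:8443/healthz every few seconds until the apiserver answers "ok" or the deadline lapses. Below is a minimal Go sketch of that pattern — not minikube's actual api_server.go code; the InsecureSkipVerify transport is a simplification, since the real client trusts the profile CA shown in the rest.Config dump above.

    // Hedged sketch (not minikube's actual implementation): poll the
    // apiserver /healthz endpoint until it answers "ok" or a deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, wait time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: the real client trusts the cluster CA instead
            // of skipping TLS verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(wait)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil // apiserver is healthy
                }
            }
            time.Sleep(5 * time.Second) // matches the ~5s cadence in the log
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Each "stopped: ... context deadline exceeded" line that follows is one failed iteration of a loop like this.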
	I0311 04:20:59.075093    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:59.075293    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:59.094661    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:59.094759    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:59.109324    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:59.109410    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:59.121553    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:59.121620    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:59.132275    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:59.132343    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:59.146929    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:59.146995    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:59.157316    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:59.157378    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:59.167155    4187 logs.go:276] 0 containers: []
	W0311 04:20:59.167170    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:59.167221    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:59.179382    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:59.179401    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:59.179407    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:59.216776    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:59.216788    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:59.231218    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:59.231230    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:59.243232    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:59.243246    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:59.257606    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:59.257621    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:59.272999    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:59.273011    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:59.284929    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:59.284940    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:59.320283    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:59.320295    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:59.334612    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:59.334626    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:59.349812    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:59.349823    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:59.366694    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:59.366706    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:59.383199    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:59.383210    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:59.394149    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:59.394159    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:59.431107    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:59.431116    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:59.442969    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:59.442979    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:59.454342    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:59.454353    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:59.475908    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:59.475915    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
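[Editor's note] The "Gathering logs for ..." cycle pid 4187 just completed follows a fixed two-step pattern per component: list container IDs with a docker name filter, then tail each container's last 400 log lines. A hedged Go sketch of that pattern (illustrative only; the real code lives in logs.go and runs these commands over SSH via ssh_runner):

    // Hedged sketch of the two-step pattern above: find container IDs with a
    // docker name filter, then tail the last 400 log lines of each match.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                continue // docker unavailable; nothing to gather
            }
            for _, id := range strings.Fields(string(out)) {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }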
	I0311 04:21:01.981760    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:05.412286    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:05.412333    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:06.983880    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:06.983992    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:21:07.007391    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:21:07.007469    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:21:07.018176    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:21:07.018250    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:21:07.028131    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:21:07.028196    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:21:07.042714    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:21:07.042777    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:21:07.053374    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:21:07.053459    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:21:07.063814    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:21:07.063883    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:21:07.074249    4187 logs.go:276] 0 containers: []
	W0311 04:21:07.074261    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:21:07.074318    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:21:07.091353    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:21:07.091369    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:21:07.091374    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:21:07.129345    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:21:07.129354    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:21:07.149252    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:21:07.149262    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:21:07.166966    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:21:07.166983    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:21:07.178876    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:21:07.178891    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:21:07.191613    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:21:07.191623    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:21:07.213144    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:21:07.213153    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:21:07.217051    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:21:07.217061    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:21:07.252780    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:21:07.252792    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:21:07.277918    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:21:07.277935    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:21:07.306285    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:21:07.306298    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:21:07.319145    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:21:07.319157    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:21:07.333419    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:21:07.333431    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:21:07.371861    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:21:07.371873    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:21:07.383433    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:21:07.383443    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:21:07.394934    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:21:07.394946    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:21:07.408689    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:21:07.408704    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:21:10.412884    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:10.412920    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:09.930828    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:15.413006    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:15.413033    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:14.932887    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:14.932997    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:21:14.944261    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:21:14.944332    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:21:14.955468    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:21:14.955536    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:21:14.967087    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:21:14.967156    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:21:14.977623    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:21:14.977682    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:21:14.987981    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:21:14.988050    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:21:14.998539    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:21:14.998606    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:21:15.009631    4187 logs.go:276] 0 containers: []
	W0311 04:21:15.009643    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:21:15.009698    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:21:15.019859    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:21:15.019878    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:21:15.019884    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:21:15.031099    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:21:15.031115    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:21:15.048915    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:21:15.048925    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:21:15.061506    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:21:15.061519    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:21:15.073257    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:21:15.073266    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:21:15.084199    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:21:15.084210    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:21:15.096737    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:21:15.096747    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:21:15.134120    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:21:15.134134    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:21:15.138718    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:21:15.138731    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:21:15.152618    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:21:15.152630    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:21:15.176361    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:21:15.176371    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:21:15.191749    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:21:15.191761    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:21:15.206470    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:21:15.206482    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:21:15.217916    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:21:15.217927    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:21:15.232302    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:21:15.232313    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:21:15.244441    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:21:15.244453    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:21:15.280326    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:21:15.280338    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:21:17.820995    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:20.413374    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:20.413400    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:22.822675    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:22.822755    4187 kubeadm.go:591] duration metric: took 4m4.217299542s to restartPrimaryControlPlane
	W0311 04:21:22.822823    4187 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 04:21:22.822858    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0311 04:21:23.849406    4187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.02656125s)
	I0311 04:21:23.849492    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 04:21:23.854373    4187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:21:23.857201    4187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:21:23.859990    4187 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 04:21:23.859995    4187 kubeadm.go:156] found existing configuration files:
	
	I0311 04:21:23.860016    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf
	I0311 04:21:23.862611    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 04:21:23.862632    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:21:23.865281    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf
	I0311 04:21:23.868564    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 04:21:23.868594    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:21:23.871695    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf
	I0311 04:21:23.874166    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 04:21:23.874186    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:21:23.876744    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf
	I0311 04:21:23.879976    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 04:21:23.879998    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
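[Editor's note] The cleanup pid 4187 just performed checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not match (here all four are simply absent after kubeadm reset, so grep exits with status 2 and the rm is a no-op). A minimal sketch of that check-then-remove logic, assuming the endpoint and file list from this run:

    // Hedged sketch of the stale-kubeconfig cleanup above: keep each file only
    // if it already references the expected control-plane endpoint.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50372" // port from this run
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                _ = exec.Command("sudo", "rm", "-f", f).Run()
                fmt.Println("removed stale", f)
            }
        }
    }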
	I0311 04:21:23.882802    4187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 04:21:23.900955    4187 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0311 04:21:23.900987    4187 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 04:21:23.956335    4187 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 04:21:23.956450    4187 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 04:21:23.956503    4187 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0311 04:21:24.006321    4187 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 04:21:24.010457    4187 out.go:204]   - Generating certificates and keys ...
	I0311 04:21:24.010495    4187 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 04:21:24.010529    4187 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 04:21:24.010579    4187 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 04:21:24.010611    4187 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 04:21:24.010648    4187 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 04:21:24.010677    4187 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 04:21:24.010712    4187 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 04:21:24.010742    4187 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 04:21:24.010786    4187 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 04:21:24.010826    4187 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 04:21:24.010843    4187 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 04:21:24.010873    4187 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 04:21:24.108215    4187 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 04:21:24.280071    4187 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 04:21:24.460963    4187 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 04:21:24.615849    4187 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 04:21:24.644300    4187 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 04:21:24.644766    4187 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 04:21:24.644910    4187 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 04:21:24.737485    4187 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 04:21:25.413658    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:25.413706    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:24.740726    4187 out.go:204]   - Booting up control plane ...
	I0311 04:21:24.740777    4187 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 04:21:24.740820    4187 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 04:21:24.740855    4187 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 04:21:24.740897    4187 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 04:21:24.741824    4187 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 04:21:30.414102    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:30.414129    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0311 04:21:30.768277    4133 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0311 04:21:29.743533    4187 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.003525 seconds
	I0311 04:21:29.743641    4187 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 04:21:29.749294    4187 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 04:21:30.259843    4187 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 04:21:30.259937    4187 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-629000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 04:21:30.771904    4187 kubeadm.go:309] [bootstrap-token] Using token: aitobb.rtfe8rta363qoqrs
	I0311 04:21:30.772628    4133 out.go:177] * Enabled addons: storage-provisioner
	I0311 04:21:30.782587    4133 addons.go:505] duration metric: took 30.472184666s for enable addons: enabled=[storage-provisioner]
	I0311 04:21:30.782585    4187 out.go:204]   - Configuring RBAC rules ...
	I0311 04:21:30.782724    4187 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 04:21:30.788029    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 04:21:30.792450    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 04:21:30.794046    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 04:21:30.795555    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 04:21:30.797211    4187 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 04:21:30.804082    4187 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 04:21:30.988323    4187 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 04:21:31.191374    4187 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 04:21:31.191817    4187 kubeadm.go:309] 
	I0311 04:21:31.191850    4187 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 04:21:31.191854    4187 kubeadm.go:309] 
	I0311 04:21:31.191902    4187 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 04:21:31.191908    4187 kubeadm.go:309] 
	I0311 04:21:31.191934    4187 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 04:21:31.191962    4187 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 04:21:31.191986    4187 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 04:21:31.191988    4187 kubeadm.go:309] 
	I0311 04:21:31.192020    4187 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 04:21:31.192026    4187 kubeadm.go:309] 
	I0311 04:21:31.192054    4187 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 04:21:31.192057    4187 kubeadm.go:309] 
	I0311 04:21:31.192085    4187 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 04:21:31.192138    4187 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 04:21:31.192178    4187 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 04:21:31.192184    4187 kubeadm.go:309] 
	I0311 04:21:31.192231    4187 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 04:21:31.192274    4187 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 04:21:31.192278    4187 kubeadm.go:309] 
	I0311 04:21:31.192322    4187 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aitobb.rtfe8rta363qoqrs \
	I0311 04:21:31.192396    4187 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e \
	I0311 04:21:31.192406    4187 kubeadm.go:309] 	--control-plane 
	I0311 04:21:31.192410    4187 kubeadm.go:309] 
	I0311 04:21:31.192447    4187 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 04:21:31.192450    4187 kubeadm.go:309] 
	I0311 04:21:31.192502    4187 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aitobb.rtfe8rta363qoqrs \
	I0311 04:21:31.192557    4187 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e 
	I0311 04:21:31.192743    4187 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 04:21:31.192753    4187 cni.go:84] Creating CNI manager for ""
	I0311 04:21:31.192761    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:21:31.200563    4187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 04:21:31.204550    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 04:21:31.207632    4187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
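[Editor's note] The 457-byte conflist minikube copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The sketch below writes an illustrative bridge-CNI config of the general shape such a file takes; every field value here is an assumption, not minikube's exact payload.

    // Hedged sketch: writes an illustrative bridge-CNI conflist. The real
    // 457-byte payload is not in the log; the JSON below is an assumption
    // about its general shape, not minikube's exact output.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // Writing under /etc requires root in practice.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }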
	I0311 04:21:31.212513    4187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 04:21:31.212561    4187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 04:21:31.212569    4187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-629000 minikube.k8s.io/updated_at=2024_03_11T04_21_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=stopped-upgrade-629000 minikube.k8s.io/primary=true
	I0311 04:21:31.250075    4187 kubeadm.go:1106] duration metric: took 37.557458ms to wait for elevateKubeSystemPrivileges
	I0311 04:21:31.258895    4187 ops.go:34] apiserver oom_adj: -16
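[Editor's note] The oom_adj probe at 04:21:31 resolves the apiserver's PID and reads its OOM score adjustment from /proc (logged above as -16, i.e. strongly protected from the OOM killer). A small stand-alone sketch of the same check:

    // Hedged sketch of the oom_adj probe above: resolve the apiserver PID and
    // read its OOM score adjustment from /proc (this run logged -16).
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
            return
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }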
	W0311 04:21:31.258928    4187 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 04:21:31.258933    4187 kubeadm.go:393] duration metric: took 4m12.668276209s to StartCluster
	I0311 04:21:31.258943    4187 settings.go:142] acquiring lock: {Name:mk914df43a11d01b4609d1cefd86c6d6814b7b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:31.259032    4187 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:21:31.259480    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:31.259687    4187 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:21:31.263571    4187 out.go:177] * Verifying Kubernetes components...
	I0311 04:21:31.259698    4187 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 04:21:31.259768    4187 config.go:182] Loaded profile config "stopped-upgrade-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:21:31.270449    4187 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-629000"
	I0311 04:21:31.270466    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:21:31.270476    4187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-629000"
	I0311 04:21:31.270449    4187 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-629000"
	I0311 04:21:31.270507    4187 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-629000"
	W0311 04:21:31.270511    4187 addons.go:243] addon storage-provisioner should already be in state true
	I0311 04:21:31.270545    4187 host.go:66] Checking if "stopped-upgrade-629000" exists ...
	I0311 04:21:31.271525    4187 kapi.go:59] client config for stopped-upgrade-629000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5bfd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 04:21:31.271643    4187 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-629000"
	W0311 04:21:31.271648    4187 addons.go:243] addon default-storageclass should already be in state true
	I0311 04:21:31.271655    4187 host.go:66] Checking if "stopped-upgrade-629000" exists ...
	I0311 04:21:31.276459    4187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:21:31.280607    4187 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 04:21:31.280613    4187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 04:21:31.280619    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:21:31.281298    4187 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:31.281305    4187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 04:21:31.281309    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:21:31.346700    4187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:21:31.351376    4187 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:21:31.351425    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:21:31.355260    4187 api_server.go:72] duration metric: took 95.566083ms to wait for apiserver process to appear ...
	I0311 04:21:31.355267    4187 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:21:31.355273    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:31.394093    4187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 04:21:31.397183    4187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:35.414700    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:35.414796    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:36.357316    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:36.357342    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:40.415923    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:40.415944    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:41.357770    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:41.357801    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:45.417001    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:45.417024    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:46.358107    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:46.358126    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:50.417359    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:50.417392    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:51.358922    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:51.358961    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:55.419022    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:55.419043    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:56.359646    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:56.359669    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:00.420975    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:00.421074    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:00.432946    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:00.433018    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:00.459522    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:00.459594    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:00.470583    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:00.470659    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:00.480816    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:00.480879    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:00.491546    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:00.491618    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:00.502005    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:00.502073    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:00.511769    4133 logs.go:276] 0 containers: []
	W0311 04:22:00.511778    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:00.511832    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:00.521936    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:00.521951    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:00.521957    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:00.546486    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:00.546495    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:00.582279    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:00.582288    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:00.587045    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:00.587054    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:00.623322    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:00.623334    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:00.635203    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:00.635218    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:00.650152    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:00.650164    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:00.665716    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:00.665728    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:00.676747    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:00.676758    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:00.687934    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:00.687944    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:00.702395    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:00.702408    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:00.718357    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:00.718368    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:00.730250    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:00.730262    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:01.360489    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:01.360525    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0311 04:22:01.763662    4187 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0311 04:22:01.768995    4187 out.go:177] * Enabled addons: storage-provisioner
	I0311 04:22:01.776881    4187 addons.go:505] duration metric: took 30.518085375s for enable addons: enabled=[storage-provisioner]
	I0311 04:22:03.251441    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:06.361410    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:06.361451    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:08.253813    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:08.254053    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:08.270233    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:08.270330    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:08.283822    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:08.283897    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:08.294941    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:08.295008    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:08.305519    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:08.305581    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:08.315721    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:08.315796    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:08.326257    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:08.326320    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:08.336471    4133 logs.go:276] 0 containers: []
	W0311 04:22:08.336480    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:08.336544    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:08.347744    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:08.347784    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:08.347794    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:08.383139    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:08.383154    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:08.397984    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:08.397997    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:08.409294    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:08.409308    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:08.424573    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:08.424587    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:08.436729    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:08.436741    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:08.462084    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:08.462096    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:08.479152    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:08.479164    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:08.515437    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:08.515446    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:08.519676    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:08.519685    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:08.533640    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:08.533650    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:08.544736    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:08.544746    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:08.558538    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:08.558547    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:11.362824    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:11.362865    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:11.078869    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:16.364661    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:16.364677    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:16.081102    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:16.081323    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:16.100784    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:16.100858    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:16.112026    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:16.112089    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:16.123111    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:16.123181    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:16.134716    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:16.134784    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:16.145363    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:16.145443    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:16.157977    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:16.158040    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:16.168194    4133 logs.go:276] 0 containers: []
	W0311 04:22:16.168207    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:16.168267    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:16.178276    4133 logs.go:276] 1 containers: [8165e3771f97]
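The enumeration pass that precedes each gathering run issues one `docker ps -a` per control-plane component, filtered on the `k8s_` name prefix that dockershim/cri-dockerd gives pod containers (k8s_<container>_<pod>_...), and reads one container ID per output line; an empty result yields the "No container was found matching" warning, which here simply records that no kindnet container exists. A sketch of that parse, with the helper name ours:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the logged command
// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containerIDs(component string) []string {
	out, _ := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	return strings.Fields(string(out)) // e.g. []string{"8f9c34300be9"}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids := containerIDs(c)
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```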
	I0311 04:22:16.178290    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:16.178296    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:16.212946    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:16.212957    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:16.248189    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:16.248201    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:16.267036    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:16.267048    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:16.281817    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:16.281829    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:16.299054    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:16.299064    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:16.318135    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:16.318146    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:16.329586    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:16.329598    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:16.334630    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:16.334637    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:16.349495    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:16.349505    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:16.360749    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:16.360766    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:16.371526    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:16.371536    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:16.383108    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:16.383118    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
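The "container status" step relies on a shell fallback: the backticks expand `which crictl || echo crictl` to crictl's full path when it is installed (or to the bare word otherwise, which then fails to run), and the outer `|| sudo docker ps -a` falls back to Docker whenever the crictl invocation fails. The same logic rendered in Go, as a sketch rather than minikube's actual code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH, otherwise falls
// back to `docker ps -a`, mirroring the one-liner in the log above.
func containerStatus() (string, error) {
	if path, err := exec.LookPath("crictl"); err == nil { // `which crictl`
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput() // `|| sudo docker ps -a`
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(err, out)
}
```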
	I0311 04:22:18.908778    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:21.366685    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:21.366717    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:23.913889    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:23.914129    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:23.941039    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:23.941169    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:23.959277    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:23.959363    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:23.972711    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:23.972779    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:23.988227    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:23.988298    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:24.002266    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:24.002336    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:24.012791    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:24.012862    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:24.022581    4133 logs.go:276] 0 containers: []
	W0311 04:22:24.022596    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:24.022651    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:24.032756    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:24.032774    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:24.032779    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:24.046755    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:24.046768    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:24.060621    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:24.060633    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:24.072435    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:24.072446    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:24.083664    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:24.083673    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:24.097913    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:24.097927    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:24.112502    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:24.112510    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:24.129757    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:24.129770    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:24.142600    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:24.142614    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:24.178817    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:24.178827    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:24.183282    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:24.183292    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:24.218628    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:24.218640    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:24.243427    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:24.243435    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:26.376439    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:26.376466    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:26.761676    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:31.385804    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:31.385972    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:31.400085    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:31.400168    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:31.422805    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:31.422884    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:31.439860    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:31.439933    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:31.450703    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:31.450775    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:31.461801    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:31.461866    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:31.476403    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:31.476475    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:31.488032    4187 logs.go:276] 0 containers: []
	W0311 04:22:31.488045    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:31.488104    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:31.498547    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:31.498563    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:31.498570    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:31.533681    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:31.533693    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:31.545716    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:31.545729    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:31.557068    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:31.557079    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:31.568851    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:31.568861    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:31.591844    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:31.591858    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:31.603196    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:31.603206    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:31.638277    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:31.638291    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:31.652852    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:31.652863    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:31.669418    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:31.669431    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:31.684319    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:31.684332    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:31.703208    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:31.703217    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:31.714664    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:31.714676    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:31.769155    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:31.769317    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:31.781382    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:31.781456    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:31.799926    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:31.800017    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:31.810319    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:31.810384    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:31.821564    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:31.821633    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:31.832387    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:31.832462    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:31.843517    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:31.843581    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:31.852932    4133 logs.go:276] 0 containers: []
	W0311 04:22:31.852943    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:31.852998    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:31.864150    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:31.864163    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:31.864168    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:31.868493    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:31.868498    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:31.908267    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:31.908280    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:31.922182    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:31.922195    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:31.933868    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:31.933884    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:31.951858    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:31.951873    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:31.985876    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:31.985885    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:31.997511    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:31.997525    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:32.012709    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:32.012722    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:32.027870    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:32.027881    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:32.041376    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:32.041386    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:32.063168    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:32.063180    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:32.086737    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:32.086746    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:34.605765    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:34.224649    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
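Timestamps in this capture are only monotonic per process: two minikube clients (PIDs 4133 and 4187) write to the same stream, which is why 4133's 04:22:34.605765 line lands just before 4187's 04:22:34.224649 line above. When reading such a capture it helps to split it on the PID field of the klog header (`Lmmdd hh:mm:ss.uuuuuu pid file:line] msg`); a small demultiplexer sketch:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Matches klog headers such as "I0311 04:22:34.224649    4187 api_server.go:253]".
var header = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\]`)

func main() {
	byPID := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text()) // log lines here are tab-indented
		if m := header.FindStringSubmatch(line); m != nil {
			byPID[m[4]]++ // m[4] is the PID field
		}
	}
	for pid, n := range byPID {
		fmt.Printf("pid %s: %d lines\n", pid, n)
	}
}
```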
	I0311 04:22:39.612133    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:39.612246    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:39.625267    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:39.625365    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:39.636053    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:39.636127    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:39.646619    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:39.646693    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:39.659272    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:39.659345    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:39.669322    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:39.669393    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:39.680474    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:39.680545    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:39.691039    4133 logs.go:276] 0 containers: []
	W0311 04:22:39.691049    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:39.691103    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:39.701206    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:39.701220    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:39.701226    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:39.736524    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:39.736539    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:39.751373    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:39.751387    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:39.764679    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:39.764690    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:39.783795    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:39.783809    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:39.806599    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:39.806608    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:39.817573    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:39.817585    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:39.829801    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:39.829812    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:39.865605    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:39.865616    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:39.870558    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:39.870565    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:39.882294    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:39.882307    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:39.893055    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:39.893065    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:39.904517    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:39.904527    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:39.231300    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:39.231737    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:39.263572    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:39.263702    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:39.283129    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:39.283215    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:39.297416    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:39.297477    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:39.309474    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:39.309546    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:39.320765    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:39.320846    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:39.331842    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:39.331915    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:39.342032    4187 logs.go:276] 0 containers: []
	W0311 04:22:39.342044    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:39.342107    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:39.352219    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:39.352236    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:39.352243    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:39.386793    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:39.386806    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:39.399081    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:39.399092    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:39.416987    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:39.417000    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:39.421624    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:39.421631    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:39.440206    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:39.440218    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:39.455130    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:39.455142    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:39.469407    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:39.469421    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:39.484041    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:39.484055    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:39.500025    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:39.500039    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:39.511801    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:39.511816    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:39.535687    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:39.535696    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:39.570227    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:39.570240    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
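Every gathering pass above walks the same fixed set of sources, each reduced to a single bash command: journalctl for the kubelet and Docker/cri-docker units, dmesg, `kubectl describe nodes` via the guest-local binary pinned to the cluster version (/var/lib/minikube/binaries/v1.24.1/kubectl), and `docker logs` per enumerated container. A table-driven sketch of that dispatch, with the commands copied verbatim from the log but the structure ours, not logs.go's:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Static sources, verbatim from the log above; per-container sources would
// be appended after the `docker ps` enumeration step.
var sources = map[string]string{
	"kubelet":        "sudo journalctl -u kubelet -n 400",
	"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
}

func main() {
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("err=%v, %d bytes\n", err, len(out))
	}
}
```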
	I0311 04:22:42.085589    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:42.425661    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:47.090623    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:47.090873    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:47.115637    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:47.115757    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:47.135428    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:47.135519    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:47.148049    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:47.148114    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:47.159012    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:47.159084    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:47.169646    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:47.169720    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:47.180197    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:47.180264    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:47.190598    4187 logs.go:276] 0 containers: []
	W0311 04:22:47.190611    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:47.190674    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:47.201186    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:47.201202    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:47.201208    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:47.213246    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:47.213260    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:47.224968    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:47.224979    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:47.236554    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:47.236570    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:47.247848    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:47.247862    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:47.270984    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:47.270991    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:47.283087    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:47.283100    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:47.287216    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:47.287226    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:47.324862    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:47.324876    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:47.344798    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:47.344810    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:47.358081    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:47.358096    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:47.372779    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:47.372790    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:47.390035    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:47.390046    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:47.430386    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:47.430534    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:47.441662    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:47.441739    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:47.452283    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:47.452356    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:47.463261    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:47.463326    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:47.473532    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:47.473606    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:47.483766    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:47.483827    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:47.494161    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:47.494231    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:47.504824    4133 logs.go:276] 0 containers: []
	W0311 04:22:47.504835    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:47.504893    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:47.519153    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:47.519168    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:47.519173    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:47.533204    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:47.533214    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:47.546952    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:47.546962    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:47.565381    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:47.565393    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:47.581181    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:47.581193    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:47.593793    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:47.593804    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:47.611235    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:47.611245    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:47.627933    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:47.627944    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:47.661466    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:47.661478    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:47.673317    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:47.673326    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:47.698900    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:47.698913    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:47.703603    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:47.703612    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:47.715067    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:47.715077    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:50.252928    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:49.925250    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:55.256664    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:55.256785    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:55.267449    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:22:55.267539    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:55.278549    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:22:55.278612    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:55.289322    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:22:55.289392    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:55.306706    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:22:55.306775    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:55.317146    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:22:55.317214    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:55.330569    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:22:55.330639    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:55.340923    4133 logs.go:276] 0 containers: []
	W0311 04:22:55.340931    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:55.340984    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:55.352163    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:22:55.352176    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:55.352183    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:55.388654    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:55.388665    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:55.424100    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:22:55.424111    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:22:55.436444    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:22:55.436454    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:22:55.451544    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:22:55.451554    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:22:55.463119    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:22:55.463128    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:55.475607    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:55.475618    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:55.480511    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:22:55.480519    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:22:55.495362    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:22:55.495372    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:22:55.508966    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:22:55.508976    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:22:55.520888    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:22:55.520897    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:22:55.538506    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:22:55.538516    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:22:55.550373    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:55.550382    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:54.928960    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:54.929165    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:54.944356    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:54.944441    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:54.956608    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:54.956685    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:54.967740    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:54.967811    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:54.978196    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:54.978265    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:54.988947    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:54.989012    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:55.004709    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:55.004778    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:55.015738    4187 logs.go:276] 0 containers: []
	W0311 04:22:55.015749    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:55.015808    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:55.026582    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:55.026596    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:55.026601    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:55.030957    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:55.030964    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:55.046890    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:55.046901    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:55.060470    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:55.060482    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:55.076407    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:55.076417    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:55.087918    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:55.087928    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:55.100053    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:55.100066    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:55.133979    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:55.133990    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:55.170795    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:55.170812    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:55.182499    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:55.182515    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:55.194179    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:55.194190    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:55.211853    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:55.211863    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:55.223722    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:55.223736    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:57.749687    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:58.078205    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:02.752731    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:02.752896    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:02.771940    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:02.772027    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:02.787278    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:02.787355    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:02.800433    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:02.800496    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:02.811152    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:02.811217    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:02.822838    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:02.822903    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:02.833879    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:02.833943    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:02.844267    4187 logs.go:276] 0 containers: []
	W0311 04:23:02.844280    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:02.844335    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:02.854404    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:02.854421    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:02.854426    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:02.890671    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:02.890692    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:02.895784    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:02.895794    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:02.910449    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:02.910461    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:02.925038    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:02.925048    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:02.939970    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:02.939980    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:02.951648    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:02.951662    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:02.975206    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:02.975214    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:03.009463    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:03.009475    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:03.023920    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:03.023929    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:03.038050    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:03.038062    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:03.049399    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:03.049410    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:03.073617    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:03.073628    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:03.081343    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:03.081432    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:03.092862    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:03.092934    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:03.103746    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:03.103820    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:03.114534    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:23:03.114602    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:03.124841    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:03.124914    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:03.136206    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:03.136275    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:03.146979    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:03.147055    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:03.157638    4133 logs.go:276] 0 containers: []
	W0311 04:23:03.157650    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:03.157709    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:03.168292    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:03.168309    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:03.168314    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:03.172992    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:03.172999    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:03.208621    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:03.208632    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:03.222933    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:03.222947    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:03.236682    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:03.236691    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:03.256016    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:03.256031    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:03.267394    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:03.267408    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:03.282280    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:03.282290    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:03.293831    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:03.293841    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:03.315673    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:03.315685    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:03.326879    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:03.326888    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:03.351891    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:03.351907    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:03.386710    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:03.386719    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:05.900720    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:05.589774    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:10.903531    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:10.903640    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:10.914896    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:10.914979    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:10.927879    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:10.927952    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:10.938985    4133 logs.go:276] 2 containers: [32e95a6ab93a 5bc06b80791c]
	I0311 04:23:10.939058    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:10.949783    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:10.949849    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:10.960030    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:10.960104    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:10.970263    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:10.970325    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:10.980386    4133 logs.go:276] 0 containers: []
	W0311 04:23:10.980401    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:10.980462    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:10.990408    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:10.990421    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:10.990427    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:11.002145    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:11.002157    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:11.017384    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:11.017399    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:11.034632    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:11.034645    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:10.591597    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:10.591815    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:10.611076    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:10.611172    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:10.625609    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:10.625686    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:10.637542    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:10.637615    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:10.648308    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:10.648380    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:10.659581    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:10.659653    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:10.673768    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:10.673835    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:10.684644    4187 logs.go:276] 0 containers: []
	W0311 04:23:10.684655    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:10.684709    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:10.694803    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:10.694823    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:10.694828    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:10.712157    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:10.712168    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:10.723703    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:10.723715    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:10.728494    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:10.728502    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:10.743050    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:10.743061    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:10.761249    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:10.761261    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:10.776046    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:10.776057    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:10.787742    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:10.787753    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:10.810257    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:10.810267    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:10.821840    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:10.821851    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:10.845586    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:10.845596    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:10.879074    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:10.879082    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:10.913330    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:10.913344    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:13.431177    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:11.070050    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:11.070062    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:11.075119    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:11.075128    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:11.109187    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:11.109197    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:11.123635    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:11.123645    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:11.137585    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:11.137596    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:11.149019    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:11.149029    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:11.172028    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:11.172035    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:11.183053    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:11.183064    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:11.194448    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:11.194457    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:13.708232    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:18.433704    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:18.433860    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:18.445150    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:18.445224    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:18.455935    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:18.456012    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:18.466111    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:18.466173    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:18.479672    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:18.479734    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:18.490505    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:18.490579    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:18.501342    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:18.501407    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:18.511617    4187 logs.go:276] 0 containers: []
	W0311 04:23:18.511630    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:18.511690    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:18.522118    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:18.522132    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:18.522137    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:18.526298    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:18.526305    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:18.537559    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:18.537575    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:18.549594    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:18.549607    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:18.567523    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:18.567533    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:18.579268    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:18.579279    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:18.613030    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:18.613037    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:18.657860    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:18.657871    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:18.672584    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:18.672594    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:18.686641    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:18.686652    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:18.698464    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:18.698474    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:18.713733    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:18.713746    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:18.741950    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:18.741967    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:18.709887    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:18.709994    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:18.721458    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:18.721531    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:18.732777    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:18.732848    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:18.745419    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:18.745493    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:18.757034    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:18.757109    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:18.768211    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:18.768280    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:18.779410    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:18.779476    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:18.789760    4133 logs.go:276] 0 containers: []
	W0311 04:23:18.789770    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:18.789828    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:18.800751    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:18.800769    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:18.800775    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:18.819489    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:18.819500    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:18.832493    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:18.832505    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:18.867263    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:18.867275    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:18.882838    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:18.882851    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:18.900871    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:18.900883    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:18.925409    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:18.925420    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:18.929680    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:18.929690    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:18.943190    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:18.943201    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:18.954785    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:18.954796    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:18.966333    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:18.966344    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:18.978155    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:18.978166    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:19.013517    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:19.013526    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:19.027850    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:19.027860    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:19.038972    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:19.038984    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:21.259985    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:21.552004    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:26.262497    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:26.262745    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:26.281675    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:26.281741    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:26.295167    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:26.295225    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:26.306123    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:26.306183    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:26.316368    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:26.316420    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:26.326890    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:26.326954    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:26.337666    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:26.337724    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:26.352201    4187 logs.go:276] 0 containers: []
	W0311 04:23:26.352215    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:26.352267    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:26.363270    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:26.363285    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:26.363290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:26.375120    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:26.375130    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:26.410208    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:26.410220    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:26.423887    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:26.423900    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:26.435599    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:26.435609    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:26.447129    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:26.447140    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:26.461452    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:26.461464    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:26.476254    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:26.476265    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:26.493935    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:26.493944    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:26.498230    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:26.498239    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:26.531867    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:26.531881    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:26.546062    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:26.546073    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:26.570735    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:26.570750    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:29.089519    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:26.554365    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:26.554480    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:26.573329    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:26.573483    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:26.586746    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:26.586823    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:26.598342    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:26.598418    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:26.610997    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:26.611070    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:26.622733    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:26.622810    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:26.635018    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:26.635093    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:26.646334    4133 logs.go:276] 0 containers: []
	W0311 04:23:26.646348    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:26.646405    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:26.657621    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:26.657643    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:26.657648    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:26.671587    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:26.671598    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:26.683049    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:26.683063    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:26.695581    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:26.695594    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:26.711174    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:26.711184    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:26.723040    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:26.723049    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:26.741452    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:26.741463    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:26.765622    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:26.765633    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:26.770138    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:26.770145    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:26.785359    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:26.785369    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:26.820841    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:26.820851    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:26.856114    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:26.856125    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:26.868092    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:26.868103    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:26.879609    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:26.879623    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:26.904080    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:26.904095    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:29.417631    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:34.090188    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:34.090349    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:34.419441    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:34.419601    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:34.434707    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:34.434781    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:34.445346    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:34.445416    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:34.456246    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:34.456324    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:34.467246    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:34.467314    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:34.479582    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:34.479642    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:34.494163    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:34.494226    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:34.504456    4133 logs.go:276] 0 containers: []
	W0311 04:23:34.504468    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:34.504517    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:34.514652    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:34.514673    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:34.514678    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:34.551278    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:34.551287    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:34.564655    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:34.564665    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:34.576230    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:34.576241    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:34.580894    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:34.580904    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:34.595732    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:34.595748    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:34.611686    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:34.611697    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:34.629238    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:34.629247    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:34.641632    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:34.641649    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:34.653493    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:34.653511    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:34.678051    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:34.678059    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:34.713320    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:34.713334    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:34.724450    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:34.724459    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:34.736520    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:34.736533    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:34.748108    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:34.748121    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:34.109977    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:34.110066    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:34.123488    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:34.123565    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:34.134515    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:34.134585    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:34.145048    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:34.145116    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:34.155618    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:34.155689    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:34.171000    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:34.171072    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:34.181580    4187 logs.go:276] 0 containers: []
	W0311 04:23:34.181592    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:34.181648    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:34.192597    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:34.192616    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:34.192621    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:34.211090    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:34.211101    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:34.224857    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:34.224870    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:34.236237    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:34.236249    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:34.247839    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:34.247851    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:34.259487    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:34.259498    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:34.263612    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:34.263621    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:34.298612    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:34.298625    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:34.316416    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:34.316433    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:34.334165    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:34.334175    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:34.359458    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:34.359468    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:34.370512    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:34.370524    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:34.405512    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:34.405518    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:36.919205    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:37.261836    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:41.921418    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:41.921642    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:41.943646    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:41.943749    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:41.959591    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:41.959673    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:41.972114    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:41.972189    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:41.983108    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:41.983176    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:41.993452    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:41.993527    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:42.004040    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:42.004109    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:42.013992    4187 logs.go:276] 0 containers: []
	W0311 04:23:42.014006    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:42.014063    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:42.024601    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:42.024620    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:42.024625    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:42.036296    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:42.036306    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:42.051440    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:42.051452    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:42.063397    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:42.063407    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:42.081301    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:42.081312    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:42.116687    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:42.116698    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:42.152496    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:42.152507    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:42.166560    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:42.166575    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:42.178927    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:42.178939    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:42.190880    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:42.190893    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:42.195551    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:42.195558    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:42.210157    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:42.210171    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:42.234002    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:42.234010    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:42.264098    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:42.264213    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:42.274844    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:42.274917    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:42.285650    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:42.285715    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:42.296163    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:42.296237    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:42.307890    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:42.307957    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:42.318556    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:42.318629    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:42.329094    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:42.329165    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:42.339743    4133 logs.go:276] 0 containers: []
	W0311 04:23:42.339756    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:42.339825    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:42.350208    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:42.350225    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:42.350231    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:42.385381    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:42.385393    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:42.389496    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:42.389505    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:42.423973    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:42.423987    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:42.440617    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:42.440627    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:42.452340    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:42.452350    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:42.469110    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:42.469123    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:42.483575    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:42.483586    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:42.494986    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:42.494997    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:42.506481    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:42.506491    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:42.524510    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:42.524521    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:42.536165    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:42.536175    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:42.560941    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:42.560950    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:42.577946    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:42.577956    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:42.589418    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:42.589430    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:45.103800    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:44.747356    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:50.104629    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:50.104731    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:50.115153    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:50.115225    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:50.125259    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:50.125327    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:50.139675    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:50.139757    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:50.151582    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:50.151652    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:50.163912    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:50.163987    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:50.174842    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:50.174909    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:50.185087    4133 logs.go:276] 0 containers: []
	W0311 04:23:50.185097    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:50.185156    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:50.195722    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:50.195740    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:50.195745    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:50.200538    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:50.200543    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:50.211973    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:50.211984    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:50.223695    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:50.223704    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:50.258148    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:50.258155    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:50.292176    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:50.292187    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:23:50.307039    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:50.307049    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:50.318529    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:50.318541    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:50.333155    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:50.333167    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:50.348017    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:50.348028    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:50.361034    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:50.361048    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:50.372275    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:50.372286    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:50.388028    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:50.388039    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:50.406893    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:50.406903    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:50.424079    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:50.424093    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:49.749668    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:49.749884    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:49.771552    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:49.771677    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:49.786866    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:49.786954    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:49.799703    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:23:49.799783    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:49.810447    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:49.810516    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:49.820602    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:49.820674    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:49.831207    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:49.831277    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:49.841552    4187 logs.go:276] 0 containers: []
	W0311 04:23:49.841564    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:49.841617    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:49.851873    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:49.851890    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:49.851894    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:49.885919    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:49.885927    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:49.897712    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:49.897722    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:49.911076    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:49.911087    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:49.922969    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:49.922980    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:49.938195    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:49.938206    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:49.966176    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:49.966188    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:49.977711    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:49.977725    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:49.981992    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:49.982000    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:50.016292    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:23:50.016304    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:23:50.027712    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:50.027723    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:50.039638    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:50.039650    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:50.057030    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:50.057041    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:50.068301    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:50.068315    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:50.089986    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:23:50.090000    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:23:52.603296    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:52.949671    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:57.605524    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:57.605714    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:57.620033    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:57.620110    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:57.632064    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:57.632145    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:57.643361    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:23:57.643435    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:57.654772    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:57.654834    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:57.665232    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:57.665300    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:57.676310    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:57.676374    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:57.687139    4187 logs.go:276] 0 containers: []
	W0311 04:23:57.687154    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:57.687204    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:57.698189    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:57.698204    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:57.698209    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:57.731823    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:57.731832    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:57.747120    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:57.747129    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:57.758823    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:57.758833    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:57.762903    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:23:57.762913    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:23:57.774719    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:57.774731    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:57.792181    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:57.792192    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:57.816067    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:57.816076    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:57.830769    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:57.830779    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:57.843183    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:57.843194    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:57.860090    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:57.860101    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:57.871367    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:57.871378    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:57.883910    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:57.883921    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:57.919506    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:23:57.919516    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:23:57.935303    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:57.935316    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:57.951692    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:57.951770    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:57.962392    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:23:57.962451    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:57.972503    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:23:57.972572    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:57.983192    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:23:57.983254    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:57.994084    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:23:57.994144    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:58.004910    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:23:58.004976    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:58.016186    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:23:58.016277    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:58.026500    4133 logs.go:276] 0 containers: []
	W0311 04:23:58.026511    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:58.026569    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:58.036802    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:23:58.036820    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:23:58.036826    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:23:58.048876    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:23:58.048890    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:58.060178    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:58.060186    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:58.096302    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:23:58.096313    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:23:58.111570    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:23:58.111583    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:23:58.122885    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:23:58.122893    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:23:58.134968    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:23:58.134977    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:23:58.150275    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:23:58.150285    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:23:58.177167    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:58.177177    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:58.181723    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:58.181732    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:58.215604    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:23:58.215614    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:23:58.230825    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:23:58.230836    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:23:58.242441    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:23:58.242450    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:23:58.254586    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:58.254596    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:58.279081    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:23:58.279090    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:00.792694    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:00.450368    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:05.794842    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:05.794946    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:05.808060    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:05.808135    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:05.821182    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:05.821271    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:05.833216    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:05.833283    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:05.845047    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:05.845117    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:05.856642    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:05.856709    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:05.867982    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:05.868045    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:05.880626    4133 logs.go:276] 0 containers: []
	W0311 04:24:05.880639    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:05.880696    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:05.892070    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:05.892088    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:05.892093    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:05.904431    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:05.904445    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:05.916753    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:05.916762    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:05.934695    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:05.934706    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:05.958487    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:05.958498    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:05.971933    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:05.971943    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:05.983684    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:05.983694    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:05.995754    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:05.995765    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:06.007150    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:06.007162    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:06.026146    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:06.026158    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:06.038026    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:06.038036    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:05.452478    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:05.452576    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:05.465232    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:05.465299    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:05.476996    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:05.477063    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:05.489552    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:05.489624    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:05.500971    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:05.501050    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:05.512577    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:05.512645    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:05.524070    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:05.524136    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:05.536220    4187 logs.go:276] 0 containers: []
	W0311 04:24:05.536233    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:05.536289    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:05.548202    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:05.548220    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:05.548225    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:05.563783    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:05.563796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:05.577040    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:05.577052    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:05.591680    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:05.591693    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:05.604959    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:05.604972    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:05.620012    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:05.620026    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:05.632842    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:05.632856    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:05.646592    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:05.646609    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:05.681762    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:05.681782    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:05.697244    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:05.697259    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:05.716112    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:05.716127    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:05.731663    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:05.731674    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:05.758416    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:05.758433    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:05.764020    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:05.764037    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:05.804764    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:05.804777    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:08.323469    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:06.073518    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:06.073530    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:06.078083    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:06.078092    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:06.114200    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:06.114211    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:06.137259    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:06.137270    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:08.650965    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:13.325595    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:13.325686    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:13.337011    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:13.337092    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:13.347828    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:13.347892    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:13.358263    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:13.358332    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:13.368610    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:13.368680    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:13.379783    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:13.379850    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:13.390539    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:13.390604    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:13.406522    4187 logs.go:276] 0 containers: []
	W0311 04:24:13.406533    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:13.406594    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:13.416937    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:13.416956    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:13.416964    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:13.460442    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:13.460456    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:13.472903    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:13.472915    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:13.485004    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:13.485017    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:13.505104    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:13.505116    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:13.517040    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:13.517052    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:13.552155    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:13.552165    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:13.556615    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:13.556623    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:13.570091    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:13.570103    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:13.582525    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:13.582538    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:13.599859    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:13.599870    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:13.613864    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:13.613873    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:13.625573    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:13.625584    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:13.650339    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:13.650350    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:13.662643    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:13.662655    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:13.653200    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:13.653287    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:13.664924    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:13.664998    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:13.677886    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:13.677961    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:13.688708    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:13.688785    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:13.699700    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:13.699771    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:13.710509    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:13.710583    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:13.721248    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:13.721314    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:13.731672    4133 logs.go:276] 0 containers: []
	W0311 04:24:13.731685    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:13.731743    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:13.743197    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:13.743214    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:13.743219    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:13.748250    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:13.748257    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:13.763012    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:13.763025    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:13.775104    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:13.775114    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:13.790103    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:13.790114    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:13.801137    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:13.801148    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:13.818007    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:13.818019    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:13.829768    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:13.829778    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:13.863649    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:13.863661    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:13.878752    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:13.878763    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:13.890950    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:13.890960    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:13.902037    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:13.902047    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:13.935621    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:13.935629    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:13.946929    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:13.946940    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:13.958301    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:13.958312    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:16.176941    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:16.483931    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:21.179183    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:21.179412    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:21.195751    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:21.195832    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:21.207113    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:21.207188    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:21.217409    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:21.217479    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:21.228553    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:21.228618    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:21.242709    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:21.242774    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:21.253485    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:21.253548    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:21.263711    4187 logs.go:276] 0 containers: []
	W0311 04:24:21.263721    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:21.263773    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:21.274452    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:21.274470    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:21.274476    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:21.278786    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:21.278794    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:21.292680    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:21.292690    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:21.304956    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:21.304971    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:21.316134    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:21.316145    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:21.331718    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:21.331729    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:21.343590    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:21.343600    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:21.363618    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:21.363630    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:21.378083    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:21.378093    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:21.389866    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:21.389877    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:21.413469    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:21.413478    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:21.424558    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:21.424568    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:21.458527    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:21.458534    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:21.493434    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:21.493447    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:21.506355    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:21.506373    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:24.020561    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:21.486177    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:21.486314    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:21.498582    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:21.498657    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:21.510299    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:21.510372    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:21.521744    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:21.521820    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:21.532344    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:21.532414    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:21.543293    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:21.543362    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:21.553671    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:21.553731    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:21.564354    4133 logs.go:276] 0 containers: []
	W0311 04:24:21.564368    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:21.564429    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:21.575421    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:21.575437    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:21.575442    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:21.593266    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:21.593277    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:21.611593    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:21.611602    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:21.622935    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:21.622948    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:21.639681    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:21.639699    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:21.651686    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:21.651700    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:21.667375    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:21.667386    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:21.680996    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:21.681007    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:21.716467    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:21.716477    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:21.721067    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:21.721078    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:21.734579    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:21.734591    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:21.746571    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:21.746582    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:21.780358    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:21.780368    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:21.792517    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:21.792528    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:21.810406    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:21.810419    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:24.335854    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:29.023046    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:29.023232    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:29.039621    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:29.039706    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:29.052453    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:29.052532    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:29.070993    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:29.071070    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:29.338079    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:29.338164    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:29.351163    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:29.351237    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:29.362455    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:29.362530    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:29.373816    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:29.373894    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:29.385131    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:29.385198    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:29.396349    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:29.396419    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:29.408658    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:29.408725    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:29.419197    4133 logs.go:276] 0 containers: []
	W0311 04:24:29.419207    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:29.419262    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:29.429407    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:29.429423    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:29.429428    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:29.441355    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:29.441373    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:29.466094    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:29.466112    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:29.478382    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:29.478393    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:29.490013    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:29.490028    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:29.503843    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:29.503857    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:29.508752    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:29.508760    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:29.543933    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:29.543945    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:29.557941    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:29.557952    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:29.569537    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:29.569546    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:29.581131    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:29.581141    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:29.616896    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:29.616904    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:29.631120    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:29.631131    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:29.643794    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:29.643804    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:29.658686    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:29.658696    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:29.104872    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:29.104944    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:29.114858    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:29.114919    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:29.125665    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:29.125729    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:29.136037    4187 logs.go:276] 0 containers: []
	W0311 04:24:29.136049    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:29.136104    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:29.146139    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:29.146154    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:29.146159    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:29.180734    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:29.180747    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:29.195199    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:29.195219    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:29.210662    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:29.210673    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:29.222288    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:29.222300    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:29.226515    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:29.226524    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:29.247180    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:29.247190    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:29.258581    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:29.258592    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:29.270723    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:29.270733    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:29.286982    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:29.286990    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:29.298572    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:29.298583    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:29.333575    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:29.333587    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:29.346486    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:29.346498    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:29.361930    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:29.361942    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:29.388796    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:29.388808    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:31.917698    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:32.179636    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:36.920186    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:36.920365    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:36.935349    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:36.935435    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:36.947215    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:36.947281    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:36.958273    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:36.958339    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:36.968773    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:36.968843    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:36.979660    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:36.979725    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:36.990482    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:36.990546    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:37.001272    4187 logs.go:276] 0 containers: []
	W0311 04:24:37.001287    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:37.001340    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:37.012379    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:37.012398    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:37.012404    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:37.048056    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:37.048070    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:37.059962    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:37.059974    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:37.083607    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:37.083616    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:37.116702    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:37.116710    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:37.121297    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:37.121303    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:37.136112    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:37.136126    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:37.149685    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:37.149697    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:37.161310    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:37.161321    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:37.172751    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:37.172761    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:37.187691    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:37.187703    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:37.200810    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:37.200821    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:37.228227    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:37.228241    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:37.244160    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:37.244171    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:37.268258    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:37.268269    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:37.181835    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:37.181987    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:37.193469    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:37.193539    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:37.205899    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:37.205968    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:37.217505    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:37.217577    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:37.228835    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:37.228947    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:37.240744    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:37.240811    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:37.253773    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:37.253848    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:37.265203    4133 logs.go:276] 0 containers: []
	W0311 04:24:37.265213    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:37.265276    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:37.276551    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:37.276570    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:37.276575    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:37.301323    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:37.301333    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:37.312803    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:37.312813    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:37.327318    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:37.327329    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:37.340897    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:37.340908    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:37.352906    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:37.352919    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:37.365247    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:37.365262    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:37.380938    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:37.380951    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:37.411611    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:37.411629    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:37.420226    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:37.420247    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:37.440319    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:37.440333    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:37.462260    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:37.462272    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:37.477321    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:37.477333    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:37.512781    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:37.512789    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:37.548272    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:37.548283    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:40.061811    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:39.783190    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:45.064009    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:45.064087    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:45.075118    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:45.075187    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:45.086915    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:45.086989    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:45.098500    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:45.098573    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:45.114532    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:45.114605    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:45.129312    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:45.129389    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:45.140642    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:45.140715    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:45.152345    4133 logs.go:276] 0 containers: []
	W0311 04:24:45.152358    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:45.152420    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:45.164187    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:45.164205    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:45.164210    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:45.187675    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:45.187685    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:45.204352    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:45.204362    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:45.215902    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:45.215911    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:45.228915    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:45.228925    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:45.244328    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:45.244339    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:45.256459    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:45.256469    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:45.291966    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:45.291975    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:45.328112    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:45.328126    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:45.344756    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:45.344767    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:45.356584    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:45.356599    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:45.375713    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:45.375732    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:45.380967    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:45.380978    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:45.397286    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:45.397301    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:45.411623    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:45.411637    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:44.783646    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:44.783881    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:44.808904    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:44.809003    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:44.823826    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:44.823896    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:44.836128    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:44.836204    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:44.847007    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:44.847077    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:44.857259    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:44.857328    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:44.868151    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:44.868226    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:44.878743    4187 logs.go:276] 0 containers: []
	W0311 04:24:44.878754    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:44.878808    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:44.889099    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:44.889112    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:44.889117    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:44.911021    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:44.911031    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:44.923016    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:44.923027    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:44.940679    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:44.940687    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:44.945291    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:44.945300    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:44.959609    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:44.959624    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:44.984584    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:44.984593    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:44.995957    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:44.995968    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:45.011336    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:45.011346    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:45.045249    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:45.045260    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:45.060256    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:45.060264    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:45.073559    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:45.073574    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:45.087168    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:45.087177    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:45.123955    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:45.123969    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:45.136895    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:45.136908    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:47.652416    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:47.925683    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:52.654742    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:52.655215    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:52.692471    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:52.692603    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:52.712551    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:52.712648    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:52.727870    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:52.727951    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:52.741295    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:52.741366    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:52.751887    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:52.751959    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:52.762870    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:52.762940    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:52.773122    4187 logs.go:276] 0 containers: []
	W0311 04:24:52.773133    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:52.773191    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:52.785557    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:52.785575    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:52.785582    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:52.820987    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:52.820999    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:52.833204    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:52.833217    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:52.845573    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:52.845585    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:52.858601    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:52.858613    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:52.874068    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:52.874081    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:52.890763    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:52.890778    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:52.902607    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:52.902616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:52.919721    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:52.919731    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:52.932669    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:52.932681    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:52.960484    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:52.960494    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:52.986186    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:52.986200    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:53.021769    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:53.021783    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:53.026474    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:53.026486    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:53.041504    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:53.041512    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:52.928056    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:52.928196    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:52.947353    4133 logs.go:276] 1 containers: [8f9c34300be9]
	I0311 04:24:52.947425    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:52.958283    4133 logs.go:276] 1 containers: [df00a11636ed]
	I0311 04:24:52.958346    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:52.969042    4133 logs.go:276] 4 containers: [c4c19bca6ed3 abdaeaeb07d5 32e95a6ab93a 5bc06b80791c]
	I0311 04:24:52.969115    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:52.980194    4133 logs.go:276] 1 containers: [6c5c880ebe89]
	I0311 04:24:52.980274    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:52.991286    4133 logs.go:276] 1 containers: [3fa60f772584]
	I0311 04:24:52.991357    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:53.003301    4133 logs.go:276] 1 containers: [03a5de1f95a5]
	I0311 04:24:53.003367    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:53.015932    4133 logs.go:276] 0 containers: []
	W0311 04:24:53.015945    4133 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:53.015997    4133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:53.027482    4133 logs.go:276] 1 containers: [8165e3771f97]
	I0311 04:24:53.027498    4133 logs.go:123] Gathering logs for kube-proxy [3fa60f772584] ...
	I0311 04:24:53.027503    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa60f772584"
	I0311 04:24:53.040463    4133 logs.go:123] Gathering logs for kube-apiserver [8f9c34300be9] ...
	I0311 04:24:53.040474    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9c34300be9"
	I0311 04:24:53.056066    4133 logs.go:123] Gathering logs for coredns [32e95a6ab93a] ...
	I0311 04:24:53.056077    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e95a6ab93a"
	I0311 04:24:53.067901    4133 logs.go:123] Gathering logs for coredns [5bc06b80791c] ...
	I0311 04:24:53.067910    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc06b80791c"
	I0311 04:24:53.079799    4133 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:53.079811    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:53.114599    4133 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:53.114609    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:53.118759    4133 logs.go:123] Gathering logs for coredns [abdaeaeb07d5] ...
	I0311 04:24:53.118768    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abdaeaeb07d5"
	I0311 04:24:53.130012    4133 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:53.130023    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:53.153859    4133 logs.go:123] Gathering logs for etcd [df00a11636ed] ...
	I0311 04:24:53.153867    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df00a11636ed"
	I0311 04:24:53.171953    4133 logs.go:123] Gathering logs for coredns [c4c19bca6ed3] ...
	I0311 04:24:53.171970    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4c19bca6ed3"
	I0311 04:24:53.183441    4133 logs.go:123] Gathering logs for kube-scheduler [6c5c880ebe89] ...
	I0311 04:24:53.183458    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c5c880ebe89"
	I0311 04:24:53.198619    4133 logs.go:123] Gathering logs for container status ...
	I0311 04:24:53.198629    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:53.214338    4133 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:53.214351    4133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:53.251674    4133 logs.go:123] Gathering logs for kube-controller-manager [03a5de1f95a5] ...
	I0311 04:24:53.251688    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a5de1f95a5"
	I0311 04:24:53.269695    4133 logs.go:123] Gathering logs for storage-provisioner [8165e3771f97] ...
	I0311 04:24:53.269705    4133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8165e3771f97"
	I0311 04:24:55.786625    4133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:55.558877    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:00.786769    4133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:00.791010    4133 out.go:177] 
	W0311 04:25:00.795121    4133 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0311 04:25:00.795126    4133 out.go:239] * 
	W0311 04:25:00.795589    4133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:25:00.811051    4133 out.go:177] 
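
This GUEST_START exit is the crux of the failure: every probe of https://10.0.2.15:8443/healthz above dies with a client-side timeout ("Client.Timeout exceeded while awaiting headers" is the signature of net/http's Client.Timeout firing before response headers arrive) until the overall 6m0s node wait expires. As a rough sketch only, not minikube's actual implementation (the real loop sits behind the api_server.go call sites in the log), the pattern is a short per-request timeout nested inside a longer overall deadline:

    // healthz_wait.go - illustrative sketch of the probe pattern visible in this
    // log, not minikube's code. The URL and the 6m0s budget come from the log
    // above; the timeout values and TLS handling here are assumptions.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            // A per-request Client.Timeout is what produces the exact error seen
            // above: "Client.Timeout exceeded while awaiting headers".
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: skip verification; minikube verifies against its own CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil // healthz reported healthy
                }
            }
            time.Sleep(2 * time.Second) // brief pause between probes, as in the log cadence
        }
        return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
            fmt.Println("wait for healthy API server:", err)
        }
    }

Since the kube-apiserver's own log further down shows a clean startup with all caches synced, the timeouts suggest a problem on the QEMU user-network path between host and guest rather than an unhealthy apiserver process.
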
	I0311 04:25:00.560146    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:00.560306    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:25:00.571060    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:25:00.571135    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:25:00.581612    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:25:00.581670    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:25:00.592159    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:25:00.592223    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:25:00.602952    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:25:00.603009    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:25:00.613530    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:25:00.613590    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:25:00.631003    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:25:00.631060    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:25:00.641495    4187 logs.go:276] 0 containers: []
	W0311 04:25:00.641509    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:25:00.641567    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:25:00.652171    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:25:00.652189    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:25:00.652194    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:25:00.663713    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:25:00.663723    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:25:00.687347    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:25:00.687355    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:25:00.691657    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:25:00.691667    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:25:00.703538    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:25:00.703549    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:25:00.715226    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:25:00.715237    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:25:00.726041    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:25:00.726051    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:25:00.739889    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:25:00.739903    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:25:00.751765    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:25:00.751775    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:25:00.787620    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:25:00.787629    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:25:00.806003    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:25:00.806012    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:25:00.822495    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:25:00.822513    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:25:00.840574    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:25:00.840587    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:25:00.863603    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:25:00.863635    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:25:00.902150    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:25:00.902168    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:25:03.420426    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:08.422468    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:08.422849    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:25:08.472013    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:25:08.472150    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:25:08.496237    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:25:08.496330    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:25:08.515899    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:25:08.515988    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:25:08.540641    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:25:08.540711    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:25:08.552565    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:25:08.552631    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:25:08.564415    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:25:08.564482    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:25:08.574651    4187 logs.go:276] 0 containers: []
	W0311 04:25:08.574661    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:25:08.574709    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:25:08.585032    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:25:08.585050    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:25:08.585055    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:25:08.621845    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:25:08.621858    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:25:08.634166    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:25:08.634181    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:25:08.651447    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:25:08.651457    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:25:08.663695    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:25:08.663707    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:25:08.677430    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:25:08.677441    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:25:08.691648    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:25:08.691659    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:25:08.710597    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:25:08.710607    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:25:08.723077    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:25:08.723092    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:25:08.758513    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:25:08.758525    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:25:08.762756    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:25:08.762762    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:25:08.776942    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:25:08.776952    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:25:08.788773    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:25:08.788783    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:25:08.800938    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:25:08.800948    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:25:08.825733    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:25:08.825744    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:25:11.339318    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
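
Between health probes, both minikube processes (pids 4133 and 4187) repeat the same diagnostic sweep seen throughout this section: list each control-plane component's containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail each hit with `docker logs --tail 400 <id>`. A minimal standalone sketch of that sweep, assuming direct access to a docker CLI rather than minikube's SSH runner:

    // gather_logs.go - standalone sketch of the diagnostic sweep in this log.
    // The two docker commands and the component names are copied from the log
    // lines above; running them locally instead of over SSH is an assumption.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("listing", c, "failed:", err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
            for _, id := range ids {
                // Mirrors: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
            }
        }
    }
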
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-03-11 11:15:44 UTC, ends at Mon 2024-03-11 11:25:16 UTC. --
	Mar 11 11:25:01 running-upgrade-745000 dockerd[3389]: time="2024-03-11T11:25:01.811093493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 11:25:01 running-upgrade-745000 dockerd[3389]: time="2024-03-11T11:25:01.811127451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 11:25:01 running-upgrade-745000 dockerd[3389]: time="2024-03-11T11:25:01.811133284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 11:25:01 running-upgrade-745000 dockerd[3389]: time="2024-03-11T11:25:01.811260197Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8a3e738e9b39c61657acbde2a6d0db2fc0fdd58ce5ed1b6ab637c4536ed59811 pid=18808 runtime=io.containerd.runc.v2
	Mar 11 11:25:02 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:02Z" level=error msg="ContainerStats resp: {0x40004fa100 linux}"
	Mar 11 11:25:03 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:03Z" level=error msg="ContainerStats resp: {0x40008b3f00 linux}"
	Mar 11 11:25:03 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:03Z" level=error msg="ContainerStats resp: {0x4000974080 linux}"
	Mar 11 11:25:03 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:03Z" level=error msg="ContainerStats resp: {0x400035b400 linux}"
	Mar 11 11:25:03 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:03Z" level=error msg="ContainerStats resp: {0x400035b580 linux}"
	Mar 11 11:25:03 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:03Z" level=error msg="ContainerStats resp: {0x400035b6c0 linux}"
	Mar 11 11:25:03 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:03Z" level=error msg="ContainerStats resp: {0x4000975480 linux}"
	Mar 11 11:25:03 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:03Z" level=error msg="ContainerStats resp: {0x40009ccc80 linux}"
	Mar 11 11:25:05 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:05Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 11 11:25:10 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:10Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 11 11:25:13 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:13Z" level=error msg="ContainerStats resp: {0x40008633c0 linux}"
	Mar 11 11:25:13 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:13Z" level=error msg="ContainerStats resp: {0x40004fb000 linux}"
	Mar 11 11:25:14 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:14Z" level=error msg="ContainerStats resp: {0x40009ccb80 linux}"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=error msg="ContainerStats resp: {0x400035a400 linux}"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=error msg="ContainerStats resp: {0x40009cc580 linux}"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=error msg="ContainerStats resp: {0x40009cc900 linux}"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=error msg="ContainerStats resp: {0x40009ccd40 linux}"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=error msg="ContainerStats resp: {0x400035b500 linux}"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=error msg="ContainerStats resp: {0x40009cd740 linux}"
	Mar 11 11:25:15 running-upgrade-745000 cri-dockerd[3228]: time="2024-03-11T11:25:15Z" level=error msg="ContainerStats resp: {0x40009cde00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8a3e738e9b39c       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   ae5d635ab1b72
	2cd6c8ea0e83d       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   1e64467ce3fee
	c4c19bca6ed33       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1e64467ce3fee
	abdaeaeb07d52       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ae5d635ab1b72
	3fa60f7725841       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   902f93e541ede
	8165e3771f970       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   5b371c8069005
	df00a11636ed6       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   01405e04f65e8
	6c5c880ebe894       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   78a00dece79c1
	03a5de1f95a5e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   bd56dc7ec11ae
	8f9c34300be9c       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   83ee024063e11
	
	
	==> coredns [2cd6c8ea0e83] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5860965533289044995.5660274189052878173. HINFO: read udp 10.244.0.2:49747->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5860965533289044995.5660274189052878173. HINFO: read udp 10.244.0.2:36208->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5860965533289044995.5660274189052878173. HINFO: read udp 10.244.0.2:39522->10.0.2.3:53: i/o timeout
	
	
	==> coredns [8a3e738e9b39] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5574499138920462272.446359172792744597. HINFO: read udp 10.244.0.3:55603->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5574499138920462272.446359172792744597. HINFO: read udp 10.244.0.3:46241->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5574499138920462272.446359172792744597. HINFO: read udp 10.244.0.3:54826->10.0.2.3:53: i/o timeout
	
	
	==> coredns [abdaeaeb07d5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:60639->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:51526->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:51827->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:42436->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:59431->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:33589->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:46796->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:55008->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:39492->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5771512797265465351.871192705811791164. HINFO: read udp 10.244.0.3:48036->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c4c19bca6ed3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:42100->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:59333->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:44358->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:56551->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:41619->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:57511->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:54096->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:38243->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:55823->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 780482738471552360.1688891298230540216. HINFO: read udp 10.244.0.2:52099->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
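
All four coredns instances report the same failure: forwarded HINFO lookups to the upstream resolver 10.0.2.3:53 (the QEMU user-mode-networking DNS) are never answered, so the guest has no working outbound DNS. A minimal sketch of that failure mode, assuming it runs inside the guest, with the addresses borrowed from the errors above:

    // dns_probe.go - sketch of the failure mode in the coredns logs above: a UDP
    // packet sent to an upstream resolver that never answers surfaces as
    // "read udp <src>-><dst>: i/o timeout". Addresses are assumed from the log.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("udp", "10.0.2.3:53", 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        // A real probe would send a well-formed DNS query (coredns sends HINFO);
        // one junk byte is enough to demonstrate the read timing out when
        // nothing answers.
        if _, err := conn.Write([]byte{0}); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        conn.SetReadDeadline(time.Now().Add(2 * time.Second))
        buf := make([]byte, 512)
        if _, err := conn.Read(buf); err != nil {
            fmt.Println(err) // e.g. read udp 10.244.0.2:49747->10.0.2.3:53: i/o timeout
        }
    }
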
	
	
	==> describe nodes <==
	Name:               running-upgrade-745000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-745000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=running-upgrade-745000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T04_21_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 11:20:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-745000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 11:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 11:21:00 +0000   Mon, 11 Mar 2024 11:20:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 11:21:00 +0000   Mon, 11 Mar 2024 11:20:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 11:21:00 +0000   Mon, 11 Mar 2024 11:20:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 11:21:00 +0000   Mon, 11 Mar 2024 11:21:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-745000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 61bb66b82355479b90d315cf3a7cca81
	  System UUID:                61bb66b82355479b90d315cf3a7cca81
	  Boot ID:                    f52a432f-aa59-4a0d-a231-a2d025b6335b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-g8spm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-p2g6f                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-745000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-745000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-745000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-qdrzz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-745000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s

	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-745000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-745000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-745000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-745000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-745000 event: Registered Node running-upgrade-745000 in Controller
	
	
	==> dmesg <==
	[  +1.644195] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.084482] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.065516] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.131945] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.083622] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.075550] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.276859] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[Mar11 11:16] systemd-fstab-generator[1942]: Ignoring "noauto" for root device
	[  +2.747703] systemd-fstab-generator[2221]: Ignoring "noauto" for root device
	[  +0.153025] systemd-fstab-generator[2255]: Ignoring "noauto" for root device
	[  +0.130287] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +0.086703] systemd-fstab-generator[2279]: Ignoring "noauto" for root device
	[ +21.363436] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.219799] systemd-fstab-generator[3183]: Ignoring "noauto" for root device
	[  +0.097449] systemd-fstab-generator[3196]: Ignoring "noauto" for root device
	[  +0.060573] systemd-fstab-generator[3207]: Ignoring "noauto" for root device
	[  +0.078968] systemd-fstab-generator[3221]: Ignoring "noauto" for root device
	[  +2.056541] systemd-fstab-generator[3375]: Ignoring "noauto" for root device
	[  +5.724341] systemd-fstab-generator[3743]: Ignoring "noauto" for root device
	[  +1.189283] systemd-fstab-generator[3869]: Ignoring "noauto" for root device
	[Mar11 11:17] kauditd_printk_skb: 68 callbacks suppressed
	[Mar11 11:20] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.573927] systemd-fstab-generator[12096]: Ignoring "noauto" for root device
	[  +5.624985] systemd-fstab-generator[12697]: Ignoring "noauto" for root device
	[  +0.449769] systemd-fstab-generator[12830]: Ignoring "noauto" for root device
	
	
	==> etcd [df00a11636ed] <==
	{"level":"info","ts":"2024-03-11T11:20:55.883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-11T11:20:55.883Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-11T11:20:55.885Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T11:20:55.885Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T11:20:55.885Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T11:20:55.885Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-11T11:20:55.885Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-11T11:20:56.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T11:20:56.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T11:20:56.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-11T11:20:56.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T11:20:56.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-11T11:20:56.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-745000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T11:20:56.568Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T11:20:56.569Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T11:20:56.569Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-11T11:20:56.569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T11:20:56.569Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:25:17 up 9 min,  0 users,  load average: 0.32, 0.37, 0.23
	Linux running-upgrade-745000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8f9c34300be9] <==
	I0311 11:20:57.781354       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0311 11:20:57.781469       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0311 11:20:57.781519       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 11:20:57.781539       1 cache.go:39] Caches are synced for autoregister controller
	I0311 11:20:57.781556       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0311 11:20:57.782482       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 11:20:57.782644       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0311 11:20:58.516315       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0311 11:20:58.694225       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0311 11:20:58.701654       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0311 11:20:58.701671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 11:20:58.843692       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 11:20:58.853835       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 11:20:58.950159       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0311 11:20:58.952085       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0311 11:20:58.952469       1 controller.go:611] quota admission added evaluator for: endpoints
	I0311 11:20:58.953649       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 11:20:59.817901       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0311 11:21:00.182671       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0311 11:21:00.187111       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0311 11:21:00.213201       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0311 11:21:00.255806       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 11:21:13.022437       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0311 11:21:13.221624       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0311 11:21:14.433126       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [03a5de1f95a5] <==
	I0311 11:21:12.665727       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0311 11:21:12.666760       1 shared_informer.go:262] Caches are synced for taint
	I0311 11:21:12.666789       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0311 11:21:12.666848       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-745000. Assuming now as a timestamp.
	I0311 11:21:12.666879       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0311 11:21:12.666909       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0311 11:21:12.666985       1 shared_informer.go:262] Caches are synced for TTL
	I0311 11:21:12.667050       1 event.go:294] "Event occurred" object="running-upgrade-745000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-745000 event: Registered Node running-upgrade-745000 in Controller"
	I0311 11:21:12.669496       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0311 11:21:12.670542       1 shared_informer.go:262] Caches are synced for cronjob
	I0311 11:21:12.670592       1 shared_informer.go:262] Caches are synced for PV protection
	I0311 11:21:12.674173       1 shared_informer.go:262] Caches are synced for endpoint
	I0311 11:21:12.737785       1 shared_informer.go:262] Caches are synced for resource quota
	I0311 11:21:12.766055       1 shared_informer.go:262] Caches are synced for attach detach
	I0311 11:21:12.771307       1 shared_informer.go:262] Caches are synced for resource quota
	I0311 11:21:12.830164       1 shared_informer.go:262] Caches are synced for namespace
	I0311 11:21:12.870646       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0311 11:21:12.870652       1 shared_informer.go:262] Caches are synced for service account
	I0311 11:21:13.024793       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0311 11:21:13.224528       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qdrzz"
	I0311 11:21:13.278144       1 shared_informer.go:262] Caches are synced for garbage collector
	I0311 11:21:13.278160       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0311 11:21:13.290441       1 shared_informer.go:262] Caches are synced for garbage collector
	I0311 11:21:13.673079       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-p2g6f"
	I0311 11:21:13.678270       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-g8spm"
	
	
	==> kube-proxy [3fa60f772584] <==
	I0311 11:21:14.419686       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0311 11:21:14.419708       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0311 11:21:14.419717       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0311 11:21:14.431257       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0311 11:21:14.431271       1 server_others.go:206] "Using iptables Proxier"
	I0311 11:21:14.431283       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0311 11:21:14.431383       1 server.go:661] "Version info" version="v1.24.1"
	I0311 11:21:14.431399       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 11:21:14.431782       1 config.go:317] "Starting service config controller"
	I0311 11:21:14.431789       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0311 11:21:14.431797       1 config.go:226] "Starting endpoint slice config controller"
	I0311 11:21:14.431799       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0311 11:21:14.432221       1 config.go:444] "Starting node config controller"
	I0311 11:21:14.432225       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0311 11:21:14.532504       1 shared_informer.go:262] Caches are synced for node config
	I0311 11:21:14.532508       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0311 11:21:14.532518       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [6c5c880ebe89] <==
	W0311 11:20:57.743289       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 11:20:57.743311       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 11:20:57.743336       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 11:20:57.743353       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 11:20:57.743387       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 11:20:57.743416       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 11:20:57.743446       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 11:20:57.743498       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 11:20:57.743552       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 11:20:57.743658       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 11:20:57.744871       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 11:20:57.744930       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 11:20:58.593036       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 11:20:58.593229       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 11:20:58.593036       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 11:20:58.593349       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 11:20:58.672825       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 11:20:58.672886       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 11:20:58.695822       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 11:20:58.695849       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 11:20:58.727345       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 11:20:58.727362       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 11:20:58.798702       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 11:20:58.798801       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0311 11:20:58.939355       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-03-11 11:15:44 UTC, ends at Mon 2024-03-11 11:25:17 UTC. --
	Mar 11 11:21:02 running-upgrade-745000 kubelet[12703]: E0311 11:21:02.431931   12703 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-745000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-745000"
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: I0311 11:21:12.650667   12703 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: I0311 11:21:12.651095   12703 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: I0311 11:21:12.675403   12703 topology_manager.go:200] "Topology Admit Handler"
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: I0311 11:21:12.853112   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md872\" (UniqueName: \"kubernetes.io/projected/637f0451-62e0-404b-a44f-fac402d9fe47-kube-api-access-md872\") pod \"storage-provisioner\" (UID: \"637f0451-62e0-404b-a44f-fac402d9fe47\") " pod="kube-system/storage-provisioner"
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: I0311 11:21:12.853142   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/637f0451-62e0-404b-a44f-fac402d9fe47-tmp\") pod \"storage-provisioner\" (UID: \"637f0451-62e0-404b-a44f-fac402d9fe47\") " pod="kube-system/storage-provisioner"
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: E0311 11:21:12.958604   12703 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: E0311 11:21:12.958629   12703 projected.go:192] Error preparing data for projected volume kube-api-access-md872 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 11 11:21:12 running-upgrade-745000 kubelet[12703]: E0311 11:21:12.958677   12703 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/637f0451-62e0-404b-a44f-fac402d9fe47-kube-api-access-md872 podName:637f0451-62e0-404b-a44f-fac402d9fe47 nodeName:}" failed. No retries permitted until 2024-03-11 11:21:13.458658674 +0000 UTC m=+13.285774550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-md872" (UniqueName: "kubernetes.io/projected/637f0451-62e0-404b-a44f-fac402d9fe47-kube-api-access-md872") pod "storage-provisioner" (UID: "637f0451-62e0-404b-a44f-fac402d9fe47") : configmap "kube-root-ca.crt" not found
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.227467   12703 topology_manager.go:200] "Topology Admit Handler"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.254559   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b896d1c1-9368-43df-ab77-12f4d5b7a47b-xtables-lock\") pod \"kube-proxy-qdrzz\" (UID: \"b896d1c1-9368-43df-ab77-12f4d5b7a47b\") " pod="kube-system/kube-proxy-qdrzz"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.254584   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkblf\" (UniqueName: \"kubernetes.io/projected/b896d1c1-9368-43df-ab77-12f4d5b7a47b-kube-api-access-nkblf\") pod \"kube-proxy-qdrzz\" (UID: \"b896d1c1-9368-43df-ab77-12f4d5b7a47b\") " pod="kube-system/kube-proxy-qdrzz"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.254596   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b896d1c1-9368-43df-ab77-12f4d5b7a47b-kube-proxy\") pod \"kube-proxy-qdrzz\" (UID: \"b896d1c1-9368-43df-ab77-12f4d5b7a47b\") " pod="kube-system/kube-proxy-qdrzz"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.254614   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b896d1c1-9368-43df-ab77-12f4d5b7a47b-lib-modules\") pod \"kube-proxy-qdrzz\" (UID: \"b896d1c1-9368-43df-ab77-12f4d5b7a47b\") " pod="kube-system/kube-proxy-qdrzz"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: E0311 11:21:13.358420   12703 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: E0311 11:21:13.358434   12703 projected.go:192] Error preparing data for projected volume kube-api-access-nkblf for pod kube-system/kube-proxy-qdrzz: configmap "kube-root-ca.crt" not found
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: E0311 11:21:13.358456   12703 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b896d1c1-9368-43df-ab77-12f4d5b7a47b-kube-api-access-nkblf podName:b896d1c1-9368-43df-ab77-12f4d5b7a47b nodeName:}" failed. No retries permitted until 2024-03-11 11:21:13.858447953 +0000 UTC m=+13.685563788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nkblf" (UniqueName: "kubernetes.io/projected/b896d1c1-9368-43df-ab77-12f4d5b7a47b-kube-api-access-nkblf") pod "kube-proxy-qdrzz" (UID: "b896d1c1-9368-43df-ab77-12f4d5b7a47b") : configmap "kube-root-ca.crt" not found
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.677271   12703 topology_manager.go:200] "Topology Admit Handler"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.685056   12703 topology_manager.go:200] "Topology Admit Handler"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.858523   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxpqn\" (UniqueName: \"kubernetes.io/projected/2a1e17a8-871f-42e7-9263-2f5877ce7d04-kube-api-access-gxpqn\") pod \"coredns-6d4b75cb6d-g8spm\" (UID: \"2a1e17a8-871f-42e7-9263-2f5877ce7d04\") " pod="kube-system/coredns-6d4b75cb6d-g8spm"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.858560   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m884b\" (UniqueName: \"kubernetes.io/projected/f97585ca-948c-49d9-b687-2d4b237816d5-kube-api-access-m884b\") pod \"coredns-6d4b75cb6d-p2g6f\" (UID: \"f97585ca-948c-49d9-b687-2d4b237816d5\") " pod="kube-system/coredns-6d4b75cb6d-p2g6f"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.858573   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f97585ca-948c-49d9-b687-2d4b237816d5-config-volume\") pod \"coredns-6d4b75cb6d-p2g6f\" (UID: \"f97585ca-948c-49d9-b687-2d4b237816d5\") " pod="kube-system/coredns-6d4b75cb6d-p2g6f"
	Mar 11 11:21:13 running-upgrade-745000 kubelet[12703]: I0311 11:21:13.858583   12703 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1e17a8-871f-42e7-9263-2f5877ce7d04-config-volume\") pod \"coredns-6d4b75cb6d-g8spm\" (UID: \"2a1e17a8-871f-42e7-9263-2f5877ce7d04\") " pod="kube-system/coredns-6d4b75cb6d-g8spm"
	Mar 11 11:25:02 running-upgrade-745000 kubelet[12703]: I0311 11:25:02.545319   12703 scope.go:110] "RemoveContainer" containerID="32e95a6ab93a41fc02dc5f66fcf582de7db62357b22300a7fd0df5d1b03cf06c"
	Mar 11 11:25:02 running-upgrade-745000 kubelet[12703]: I0311 11:25:02.566257   12703 scope.go:110] "RemoveContainer" containerID="5bc06b80791c29e4b15d141a32438bc1ee1bf0d1dcffe763756b1fb8c328dbf9"
	
	
	==> storage-provisioner [8165e3771f97] <==
	I0311 11:21:13.780575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 11:21:13.785220       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 11:21:13.785235       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 11:21:13.788328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 11:21:13.788435       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-745000_52643f35-9598-47d0-97e2-c312058a0deb!
	I0311 11:21:13.788990       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd349462-0367-4c70-9257-6cc7ebb0feff", APIVersion:"v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-745000_52643f35-9598-47d0-97e2-c312058a0deb became leader
	I0311 11:21:13.889265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-745000_52643f35-9598-47d0-97e2-c312058a0deb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-745000 -n running-upgrade-745000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-745000 -n running-upgrade-745000: exit status 2 (15.539473833s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-745000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-745000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-745000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-745000: (2.152581416s)
--- FAIL: TestRunningBinaryUpgrade (634.96s)
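Note: the check that decides this failure is the helpers_test.go status probe shown above, and it can be repeated by hand against a leftover profile. A minimal sketch, reusing the binary and profile name from this run (once the profile has been deleted the command will fail differently):

	# Query only the API-server field of the profile status, as helpers_test.go:254 does.
	out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-745000
	# "Stopped" with exit status 2 means the upgraded control plane never came back up,
	# so the harness skips its kubectl checks and the test fails.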

TestKubernetesUpgrade (18.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-368000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-368000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.805861542s)

-- stdout --
	* [kubernetes-upgrade-368000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-368000" primary control-plane node in "kubernetes-upgrade-368000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-368000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:14:41.904463    4028 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:14:41.904614    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:41.904617    4028 out.go:304] Setting ErrFile to fd 2...
	I0311 04:14:41.904620    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:41.904756    4028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:14:41.905809    4028 out.go:298] Setting JSON to false
	I0311 04:14:41.921723    4028 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2653,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:14:41.921782    4028 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:14:41.927297    4028 out.go:177] * [kubernetes-upgrade-368000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:14:41.941276    4028 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:14:41.941343    4028 notify.go:220] Checking for updates...
	I0311 04:14:41.950189    4028 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:14:41.954271    4028 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:14:41.957160    4028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:14:41.960168    4028 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:14:41.963258    4028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:14:41.966553    4028 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:14:41.966623    4028 config.go:182] Loaded profile config "offline-docker-255000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:14:41.966672    4028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:14:41.971188    4028 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:14:41.978128    4028 start.go:297] selected driver: qemu2
	I0311 04:14:41.978138    4028 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:14:41.978145    4028 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:14:41.980585    4028 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:14:41.984227    4028 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:14:41.987250    4028 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 04:14:41.987283    4028 cni.go:84] Creating CNI manager for ""
	I0311 04:14:41.987290    4028 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 04:14:41.987315    4028 start.go:340] cluster config:
	{Name:kubernetes-upgrade-368000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:14:41.992204    4028 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:42.000181    4028 out.go:177] * Starting "kubernetes-upgrade-368000" primary control-plane node in "kubernetes-upgrade-368000" cluster
	I0311 04:14:42.003996    4028 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 04:14:42.004011    4028 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 04:14:42.004024    4028 cache.go:56] Caching tarball of preloaded images
	I0311 04:14:42.004087    4028 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:14:42.004095    4028 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 04:14:42.004157    4028 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/kubernetes-upgrade-368000/config.json ...
	I0311 04:14:42.004169    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/kubernetes-upgrade-368000/config.json: {Name:mkb432db7dcaf0dd048bf65daa4fc10f034a9925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:14:42.004417    4028 start.go:360] acquireMachinesLock for kubernetes-upgrade-368000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:42.004464    4028 start.go:364] duration metric: took 30.708µs to acquireMachinesLock for "kubernetes-upgrade-368000"
	I0311 04:14:42.004479    4028 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:14:42.004509    4028 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:14:42.012217    4028 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:14:42.030984    4028 start.go:159] libmachine.API.Create for "kubernetes-upgrade-368000" (driver="qemu2")
	I0311 04:14:42.031016    4028 client.go:168] LocalClient.Create starting
	I0311 04:14:42.031092    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:14:42.031125    4028 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:42.031138    4028 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:42.031185    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:14:42.031215    4028 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:42.031222    4028 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:42.031605    4028 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:14:42.170910    4028 main.go:141] libmachine: Creating SSH key...
	I0311 04:14:42.259852    4028 main.go:141] libmachine: Creating Disk image...
	I0311 04:14:42.259859    4028 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:14:42.260058    4028 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:42.272369    4028 main.go:141] libmachine: STDOUT: 
	I0311 04:14:42.272390    4028 main.go:141] libmachine: STDERR: 
	I0311 04:14:42.272446    4028 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2 +20000M
	I0311 04:14:42.283249    4028 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:14:42.283264    4028 main.go:141] libmachine: STDERR: 
	I0311 04:14:42.283283    4028 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:42.283287    4028 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:14:42.283322    4028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:04:ea:46:49:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:42.285146    4028 main.go:141] libmachine: STDOUT: 
	I0311 04:14:42.285159    4028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:42.285175    4028 client.go:171] duration metric: took 254.161375ms to LocalClient.Create
	I0311 04:14:44.287366    4028 start.go:128] duration metric: took 2.282886334s to createHost
	I0311 04:14:44.287459    4028 start.go:83] releasing machines lock for "kubernetes-upgrade-368000", held for 2.28305125s
	W0311 04:14:44.287506    4028 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:44.297598    4028 out.go:177] * Deleting "kubernetes-upgrade-368000" in qemu2 ...
	W0311 04:14:44.325480    4028 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:44.325512    4028 start.go:728] Will try again in 5 seconds ...
	I0311 04:14:49.327565    4028 start.go:360] acquireMachinesLock for kubernetes-upgrade-368000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:49.328017    4028 start.go:364] duration metric: took 339.375µs to acquireMachinesLock for "kubernetes-upgrade-368000"
	I0311 04:14:49.328163    4028 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:14:49.328387    4028 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:14:49.333308    4028 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:14:49.375257    4028 start.go:159] libmachine.API.Create for "kubernetes-upgrade-368000" (driver="qemu2")
	I0311 04:14:49.375299    4028 client.go:168] LocalClient.Create starting
	I0311 04:14:49.375417    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:14:49.375476    4028 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:49.375491    4028 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:49.375554    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:14:49.375597    4028 main.go:141] libmachine: Decoding PEM data...
	I0311 04:14:49.375609    4028 main.go:141] libmachine: Parsing certificate...
	I0311 04:14:49.376097    4028 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:14:49.561853    4028 main.go:141] libmachine: Creating SSH key...
	I0311 04:14:49.609765    4028 main.go:141] libmachine: Creating Disk image...
	I0311 04:14:49.609770    4028 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:14:49.609945    4028 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:49.622751    4028 main.go:141] libmachine: STDOUT: 
	I0311 04:14:49.622773    4028 main.go:141] libmachine: STDERR: 
	I0311 04:14:49.622824    4028 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2 +20000M
	I0311 04:14:49.633401    4028 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:14:49.633416    4028 main.go:141] libmachine: STDERR: 
	I0311 04:14:49.633433    4028 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:49.633436    4028 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:14:49.633472    4028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:8f:fb:b3:dc:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:49.635182    4028 main.go:141] libmachine: STDOUT: 
	I0311 04:14:49.635197    4028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:49.635207    4028 client.go:171] duration metric: took 259.910708ms to LocalClient.Create
	I0311 04:14:51.637318    4028 start.go:128] duration metric: took 2.308967833s to createHost
	I0311 04:14:51.637379    4028 start.go:83] releasing machines lock for "kubernetes-upgrade-368000", held for 2.309404333s
	W0311 04:14:51.637730    4028 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-368000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-368000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:51.650529    4028 out.go:177] 
	W0311 04:14:51.655540    4028 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:14:51.655592    4028 out.go:239] * 
	* 
	W0311 04:14:51.658535    4028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:14:51.667419    4028 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-368000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
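Note: both VM create attempts above fail on the host side before Kubernetes is involved: the qemu2 driver launches QEMU through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which usually means the socket_vmnet daemon is not running on the build host. A minimal pre-flight sketch (socket and binary paths are taken from this run's cluster config; the daemon launch line follows the socket_vmnet documentation and may differ per install):

	# Check that the unix socket exists and the daemon is alive before starting qemu2 clusters.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If either is missing, (re)start the daemon as root, e.g. (flags per socket_vmnet docs):
	#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet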
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-368000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-368000: (3.082059625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-368000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-368000 status --format={{.Host}}: exit status 7 (48.378375ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-368000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-368000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184548792s)

-- stdout --
	* [kubernetes-upgrade-368000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-368000" primary control-plane node in "kubernetes-upgrade-368000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-368000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-368000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:14:54.844427    4075 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:14:54.844546    4075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:54.844549    4075 out.go:304] Setting ErrFile to fd 2...
	I0311 04:14:54.844551    4075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:14:54.844677    4075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:14:54.845699    4075 out.go:298] Setting JSON to false
	I0311 04:14:54.862334    4075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2666,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:14:54.862400    4075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:14:54.866762    4075 out.go:177] * [kubernetes-upgrade-368000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:14:54.874727    4075 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:14:54.877740    4075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:14:54.874822    4075 notify.go:220] Checking for updates...
	I0311 04:14:54.883706    4075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:14:54.886725    4075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:14:54.888223    4075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:14:54.891656    4075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:14:54.894986    4075 config.go:182] Loaded profile config "kubernetes-upgrade-368000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0311 04:14:54.895220    4075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:14:54.899528    4075 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:14:54.906701    4075 start.go:297] selected driver: qemu2
	I0311 04:14:54.906707    4075 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:14:54.906752    4075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:14:54.908958    4075 cni.go:84] Creating CNI manager for ""
	I0311 04:14:54.908972    4075 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:14:54.909002    4075 start.go:340] cluster config:
	{Name:kubernetes-upgrade-368000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:14:54.912945    4075 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:14:54.919685    4075 out.go:177] * Starting "kubernetes-upgrade-368000" primary control-plane node in "kubernetes-upgrade-368000" cluster
	I0311 04:14:54.923650    4075 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 04:14:54.923662    4075 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 04:14:54.923670    4075 cache.go:56] Caching tarball of preloaded images
	I0311 04:14:54.923721    4075 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:14:54.923726    4075 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0311 04:14:54.923769    4075 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/kubernetes-upgrade-368000/config.json ...
	I0311 04:14:54.924215    4075 start.go:360] acquireMachinesLock for kubernetes-upgrade-368000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:54.924241    4075 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "kubernetes-upgrade-368000"
	I0311 04:14:54.924248    4075 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:14:54.924254    4075 fix.go:54] fixHost starting: 
	I0311 04:14:54.924362    4075 fix.go:112] recreateIfNeeded on kubernetes-upgrade-368000: state=Stopped err=<nil>
	W0311 04:14:54.924370    4075 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:14:54.932760    4075 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-368000" ...
	I0311 04:14:54.936662    4075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:8f:fb:b3:dc:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:54.938692    4075 main.go:141] libmachine: STDOUT: 
	I0311 04:14:54.938708    4075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:54.938739    4075 fix.go:56] duration metric: took 14.486375ms for fixHost
	I0311 04:14:54.938744    4075 start.go:83] releasing machines lock for "kubernetes-upgrade-368000", held for 14.499666ms
	W0311 04:14:54.938751    4075 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:14:54.938786    4075 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:54.938790    4075 start.go:728] Will try again in 5 seconds ...
	I0311 04:14:59.939087    4075 start.go:360] acquireMachinesLock for kubernetes-upgrade-368000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:14:59.939460    4075 start.go:364] duration metric: took 278.167µs to acquireMachinesLock for "kubernetes-upgrade-368000"
	I0311 04:14:59.939588    4075 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:14:59.939633    4075 fix.go:54] fixHost starting: 
	I0311 04:14:59.940297    4075 fix.go:112] recreateIfNeeded on kubernetes-upgrade-368000: state=Stopped err=<nil>
	W0311 04:14:59.940323    4075 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:14:59.946595    4075 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-368000" ...
	I0311 04:14:59.951682    4075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:8f:fb:b3:dc:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubernetes-upgrade-368000/disk.qcow2
	I0311 04:14:59.961987    4075 main.go:141] libmachine: STDOUT: 
	I0311 04:14:59.962058    4075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:14:59.962139    4075 fix.go:56] duration metric: took 22.533458ms for fixHost
	I0311 04:14:59.962155    4075 start.go:83] releasing machines lock for "kubernetes-upgrade-368000", held for 22.673583ms
	W0311 04:14:59.962374    4075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-368000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:14:59.971265    4075 out.go:177] 
	W0311 04:14:59.974694    4075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:14:59.974772    4075 out.go:239] * 
	W0311 04:14:59.976709    4075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:14:59.984601    4075 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-368000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-368000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-368000 version --output=json: exit status 1 (60.235292ms)

** stderr ** 
	error: context "kubernetes-upgrade-368000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
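
The kubectl failure above is a downstream symptom rather than a separate bug: because "minikube start" exited before the cluster ever came up, no kubeconfig context named "kubernetes-upgrade-368000" was written. A quick way to confirm, sketched with standard kubectl commands against the kubeconfig used by this run:

	# List the contexts actually present in the test kubeconfig; the failed
	# profile's context will be absent after the aborted start.
	KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig \
	  kubectl config get-contexts
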
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-11 04:15:00.059715 -0700 PDT m=+2437.592689792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-368000 -n kubernetes-upgrade-368000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-368000 -n kubernetes-upgrade-368000: exit status 7 (35.070792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-368000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-368000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-368000
--- FAIL: TestKubernetesUpgrade (18.32s)
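
Every qemu2 start in this test failed the same way: socket_vmnet_client could not dial /var/run/socket_vmnet ("Connection refused"), so QEMU was never launched and the profile stayed Stopped. The fix lies on the build host rather than in minikube: the socket_vmnet daemon must be running and listening on that socket before the driver starts. A minimal check, assuming the /opt/socket_vmnet install shown in the command lines above (the direct invocation mirrors the upstream socket_vmnet README example; flags may differ by version):

	# Is the daemon alive, and does its Unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it (root is required for the macOS vmnet framework);
	# on a CI host it would normally run as a launchd service instead.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet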

TestStoppedBinaryUpgrade/Upgrade (637.79s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3607815745 start -p stopped-upgrade-629000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3607815745 start -p stopped-upgrade-629000 --memory=2200 --vm-driver=qemu2 : (1m42.514387916s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3607815745 -p stopped-upgrade-629000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3607815745 -p stopped-upgrade-629000 stop: (12.098694416s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-629000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0311 04:17:18.920475    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 04:19:15.843760    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 04:20:35.757319    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 04:23:38.859529    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 04:24:15.869346    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-629000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.090810917s)
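
Unlike the kubernetes-upgrade runs above, this start does boot its VM. The difference is visible in the two QEMU command lines in this report: the stopped-upgrade-629000 profile was created by the old v1.26.0 binary with no Network setting, so the current binary falls back to QEMU's built-in user-mode (slirp) networking, which needs no host daemon, whereas socket_vmnet-backed profiles must go through the broken /var/run/socket_vmnet socket. Excerpted from the logs for contrast:

	# kubernetes-upgrade-368000 (Network:socket_vmnet) -- requires the host daemon:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3 ...
	# stopped-upgrade-629000 (Network unset) -- self-contained user-mode networking:
	qemu-system-aarch64 ... -nic user,model=virtio,hostfwd=tcp::50310-:22,hostfwd=tcp::50311-:2376,hostname=stopped-upgrade-629000 ...

The interleaved cert_rotation errors from pid 1434 refer to client certificates of the addons-597000 and functional-864000 profiles, which were deleted earlier in the run; they are unrelated noise, not part of this test.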

-- stdout --
	* [stopped-upgrade-629000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-629000" primary control-plane node in "stopped-upgrade-629000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-629000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0311 04:16:49.073217    4187 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:16:49.073362    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:16:49.073366    4187 out.go:304] Setting ErrFile to fd 2...
	I0311 04:16:49.073368    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:16:49.073527    4187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:16:49.074988    4187 out.go:298] Setting JSON to false
	I0311 04:16:49.094309    4187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2781,"bootTime":1710153028,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:16:49.094387    4187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:16:49.099084    4187 out.go:177] * [stopped-upgrade-629000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:16:49.110042    4187 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:16:49.106131    4187 notify.go:220] Checking for updates...
	I0311 04:16:49.117882    4187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:16:49.122057    4187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:16:49.125102    4187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:16:49.126270    4187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:16:49.129008    4187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:16:49.132347    4187 config.go:182] Loaded profile config "stopped-upgrade-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:16:49.135044    4187 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 04:16:49.138039    4187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:16:49.142062    4187 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:16:49.149017    4187 start.go:297] selected driver: qemu2
	I0311 04:16:49.149036    4187 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:49.149115    4187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:16:49.152203    4187 cni.go:84] Creating CNI manager for ""
	I0311 04:16:49.152229    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:16:49.152253    4187 start.go:340] cluster config:
	{Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:16:49.152344    4187 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:16:49.156076    4187 out.go:177] * Starting "stopped-upgrade-629000" primary control-plane node in "stopped-upgrade-629000" cluster
	I0311 04:16:49.162068    4187 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 04:16:49.162128    4187 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0311 04:16:49.162145    4187 cache.go:56] Caching tarball of preloaded images
	I0311 04:16:49.162285    4187 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:16:49.162293    4187 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0311 04:16:49.162375    4187 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/config.json ...
	I0311 04:16:49.162678    4187 start.go:360] acquireMachinesLock for stopped-upgrade-629000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:16:49.162716    4187 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "stopped-upgrade-629000"
	I0311 04:16:49.162727    4187 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:16:49.162734    4187 fix.go:54] fixHost starting: 
	I0311 04:16:49.162850    4187 fix.go:112] recreateIfNeeded on stopped-upgrade-629000: state=Stopped err=<nil>
	W0311 04:16:49.162880    4187 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:16:49.173017    4187 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-629000" ...
	I0311 04:16:49.177162    4187 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50310-:22,hostfwd=tcp::50311-:2376,hostname=stopped-upgrade-629000 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/disk.qcow2
	I0311 04:16:49.223829    4187 main.go:141] libmachine: STDOUT: 
	I0311 04:16:49.223870    4187 main.go:141] libmachine: STDERR: 
	I0311 04:16:49.223877    4187 main.go:141] libmachine: Waiting for VM to start (ssh -p 50310 docker@127.0.0.1)...
	I0311 04:17:08.616419    4187 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/config.json ...
	I0311 04:17:08.616675    4187 machine.go:94] provisionDockerMachine start ...
	I0311 04:17:08.616727    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.616891    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.616897    4187 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 04:17:08.676262    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 04:17:08.676285    4187 buildroot.go:166] provisioning hostname "stopped-upgrade-629000"
	I0311 04:17:08.676348    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.676474    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.676481    4187 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-629000 && echo "stopped-upgrade-629000" | sudo tee /etc/hostname
	I0311 04:17:08.735868    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-629000
	
	I0311 04:17:08.735922    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.736025    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.736034    4187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-629000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-629000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-629000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 04:17:08.792892    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 04:17:08.792905    4187 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18350-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18350-986/.minikube}
	I0311 04:17:08.792919    4187 buildroot.go:174] setting up certificates
	I0311 04:17:08.792924    4187 provision.go:84] configureAuth start
	I0311 04:17:08.792928    4187 provision.go:143] copyHostCerts
	I0311 04:17:08.793008    4187 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem, removing ...
	I0311 04:17:08.793014    4187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem
	I0311 04:17:08.793126    4187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/ca.pem (1082 bytes)
	I0311 04:17:08.793291    4187 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem, removing ...
	I0311 04:17:08.793295    4187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem
	I0311 04:17:08.793351    4187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/cert.pem (1123 bytes)
	I0311 04:17:08.793477    4187 exec_runner.go:144] found /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem, removing ...
	I0311 04:17:08.793480    4187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem
	I0311 04:17:08.793525    4187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18350-986/.minikube/key.pem (1675 bytes)
	I0311 04:17:08.793607    4187 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-629000 san=[127.0.0.1 localhost minikube stopped-upgrade-629000]
	I0311 04:17:08.908450    4187 provision.go:177] copyRemoteCerts
	I0311 04:17:08.908496    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 04:17:08.908505    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:17:08.938813    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 04:17:08.945602    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 04:17:08.952491    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 04:17:08.959831    4187 provision.go:87] duration metric: took 166.898958ms to configureAuth
	I0311 04:17:08.959839    4187 buildroot.go:189] setting minikube options for container-runtime
	I0311 04:17:08.959950    4187 config.go:182] Loaded profile config "stopped-upgrade-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:17:08.959996    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:08.960093    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:08.960098    4187 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0311 04:17:09.014554    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0311 04:17:09.014563    4187 buildroot.go:70] root file system type: tmpfs
	I0311 04:17:09.014624    4187 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0311 04:17:09.014670    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:09.014787    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:09.014819    4187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0311 04:17:09.073899    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0311 04:17:09.076965    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:09.077114    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:09.077121    4187 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0311 04:17:09.421306    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0311 04:17:09.421319    4187 machine.go:97] duration metric: took 804.661917ms to provisionDockerMachine
	I0311 04:17:09.421326    4187 start.go:293] postStartSetup for "stopped-upgrade-629000" (driver="qemu2")
	I0311 04:17:09.421332    4187 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 04:17:09.421403    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 04:17:09.421412    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:17:09.451363    4187 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 04:17:09.452777    4187 info.go:137] Remote host: Buildroot 2021.02.12
	I0311 04:17:09.452785    4187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/addons for local assets ...
	I0311 04:17:09.452860    4187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18350-986/.minikube/files for local assets ...
	I0311 04:17:09.452968    4187 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem -> 14342.pem in /etc/ssl/certs
	I0311 04:17:09.453091    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 04:17:09.456101    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem --> /etc/ssl/certs/14342.pem (1708 bytes)
	I0311 04:17:09.463543    4187 start.go:296] duration metric: took 42.213208ms for postStartSetup
	I0311 04:17:09.463557    4187 fix.go:56] duration metric: took 20.301429709s for fixHost
	I0311 04:17:09.463591    4187 main.go:141] libmachine: Using SSH client type: native
	I0311 04:17:09.463697    4187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c6da90] 0x104c702f0 <nil>  [] 0s} localhost 50310 <nil> <nil>}
	I0311 04:17:09.463702    4187 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 04:17:09.515973    4187 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710155829.216220087
	
	I0311 04:17:09.515980    4187 fix.go:216] guest clock: 1710155829.216220087
	I0311 04:17:09.515984    4187 fix.go:229] Guest: 2024-03-11 04:17:09.216220087 -0700 PDT Remote: 2024-03-11 04:17:09.463558 -0700 PDT m=+20.415560376 (delta=-247.337913ms)
	I0311 04:17:09.515994    4187 fix.go:200] guest clock delta is within tolerance: -247.337913ms
	I0311 04:17:09.515996    4187 start.go:83] releasing machines lock for "stopped-upgrade-629000", held for 20.353880042s
	I0311 04:17:09.516058    4187 ssh_runner.go:195] Run: cat /version.json
	I0311 04:17:09.516069    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:17:09.516058    4187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 04:17:09.516102    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	W0311 04:17:09.516587    4187 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50310: connect: connection refused
	I0311 04:17:09.516608    4187 retry.go:31] will retry after 326.227949ms: dial tcp [::1]:50310: connect: connection refused
	W0311 04:17:09.897526    4187 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0311 04:17:09.897716    4187 ssh_runner.go:195] Run: systemctl --version
	I0311 04:17:09.902636    4187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 04:17:09.906749    4187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 04:17:09.906808    4187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0311 04:17:09.913394    4187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0311 04:17:09.922622    4187 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 04:17:09.922639    4187 start.go:494] detecting cgroup driver to use...
	I0311 04:17:09.922770    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 04:17:09.934822    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0311 04:17:09.939495    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 04:17:09.943701    4187 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 04:17:09.943744    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 04:17:09.947756    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 04:17:09.951716    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 04:17:09.955424    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 04:17:09.958912    4187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 04:17:09.962140    4187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 04:17:09.965149    4187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 04:17:09.968176    4187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 04:17:09.971275    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:10.039542    4187 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0311 04:17:10.050639    4187 start.go:494] detecting cgroup driver to use...
	I0311 04:17:10.050707    4187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0311 04:17:10.055723    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 04:17:10.060283    4187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 04:17:10.066696    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 04:17:10.071317    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 04:17:10.075876    4187 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 04:17:10.141275    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 04:17:10.147107    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 04:17:10.153067    4187 ssh_runner.go:195] Run: which cri-dockerd
	I0311 04:17:10.154496    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0311 04:17:10.157195    4187 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0311 04:17:10.161993    4187 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0311 04:17:10.224421    4187 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0311 04:17:10.293386    4187 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0311 04:17:10.293452    4187 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0311 04:17:10.298748    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:10.364602    4187 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 04:17:11.509035    4187 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.144440416s)
	I0311 04:17:11.509170    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0311 04:17:11.514093    4187 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0311 04:17:11.520163    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 04:17:11.525043    4187 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0311 04:17:11.587667    4187 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0311 04:17:11.651516    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:11.715915    4187 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0311 04:17:11.722194    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 04:17:11.726422    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:11.796990    4187 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0311 04:17:11.836303    4187 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0311 04:17:11.836375    4187 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0311 04:17:11.839691    4187 start.go:562] Will wait 60s for crictl version
	I0311 04:17:11.839744    4187 ssh_runner.go:195] Run: which crictl
	I0311 04:17:11.841276    4187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 04:17:11.855644    4187 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0311 04:17:11.855714    4187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 04:17:11.871429    4187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 04:17:11.890419    4187 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0311 04:17:11.890557    4187 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0311 04:17:11.891780    4187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 04:17:11.895799    4187 kubeadm.go:877] updating cluster {Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0311 04:17:11.895842    4187 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 04:17:11.895881    4187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 04:17:11.906127    4187 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 04:17:11.906135    4187 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 04:17:11.906176    4187 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 04:17:11.909134    4187 ssh_runner.go:195] Run: which lz4
	I0311 04:17:11.910323    4187 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 04:17:11.911430    4187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 04:17:11.911442    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0311 04:17:12.620673    4187 docker.go:649] duration metric: took 710.403334ms to copy over tarball
	I0311 04:17:12.620725    4187 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 04:17:13.788120    4187 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.167414375s)
	I0311 04:17:13.788134    4187 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 04:17:13.803933    4187 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 04:17:13.807362    4187 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0311 04:17:13.812202    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:13.876458    4187 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 04:17:15.465146    4187 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.588718875s)
	I0311 04:17:15.465233    4187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 04:17:15.476013    4187 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 04:17:15.476030    4187 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 04:17:15.476036    4187 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 04:17:15.484652    4187 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:15.484791    4187 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:15.484965    4187 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:15.484982    4187 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0311 04:17:15.485052    4187 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:15.485066    4187 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:15.485562    4187 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:15.485792    4187 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:15.493920    4187 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:15.493968    4187 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0311 04:17:15.494021    4187 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:15.494230    4187 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:15.494197    4187 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:15.494303    4187 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:15.494209    4187 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:15.494807    4187 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.430532    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.471000    4187 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0311 04:17:17.471047    4187 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.471146    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0311 04:17:17.492399    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0311 04:17:17.495196    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0311 04:17:17.511371    4187 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0311 04:17:17.511395    4187 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0311 04:17:17.511455    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0311 04:17:17.522223    4187 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0311 04:17:17.522356    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:17.524526    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0311 04:17:17.524624    4187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0311 04:17:17.537234    4187 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0311 04:17:17.537254    4187 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:17.537266    4187 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0311 04:17:17.537281    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0311 04:17:17.537305    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0311 04:17:17.549278    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0311 04:17:17.549382    4187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0311 04:17:17.550555    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:17.551256    4187 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0311 04:17:17.551271    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0311 04:17:17.560339    4187 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0311 04:17:17.560354    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0311 04:17:17.577370    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:17.583517    4187 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0311 04:17:17.583538    4187 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:17.583586    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0311 04:17:17.590716    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:17.593308    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:17.608387    4187 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0311 04:17:17.611024    4187 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0311 04:17:17.611047    4187 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:17.611098    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0311 04:17:17.624292    4187 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0311 04:17:17.624307    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0311 04:17:17.640324    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0311 04:17:17.640322    4187 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0311 04:17:17.640342    4187 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0311 04:17:17.640359    4187 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:17.640359    4187 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:17.640406    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0311 04:17:17.640435    4187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 04:17:17.641667    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0311 04:17:17.685175    4187 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0311 04:17:17.685212    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0311 04:17:17.685224    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0311 04:17:17.933130    4187 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0311 04:17:17.933568    4187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:17.965831    4187 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0311 04:17:17.965870    4187 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:17.965984    4187 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:17:17.989359    4187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 04:17:17.989492    4187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:17:17.991227    4187 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0311 04:17:17.991247    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0311 04:17:18.017255    4187 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 04:17:18.017269    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0311 04:17:18.247189    4187 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 04:17:18.247239    4187 cache_images.go:92] duration metric: took 2.771274167s to LoadCachedImages
	W0311 04:17:18.247274    4187 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
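Every image in the list above goes through the same cycle: inspect the image ID in the guest's Docker daemon, and if it does not match the expected hash, `docker rmi` the stale tag, scp the cached tarball over, and pipe it into `docker load`. A hedged sketch of one iteration of that cycle (function name hypothetical; simplified to local exec, where the real code in cache_images.go runs these over SSH):

package cache

import (
	"os"
	"os/exec"
	"strings"
)

// loadCachedImage ensures img exists in the runtime with the expected ID,
// transferring and loading the cached tarball when it does not.
func loadCachedImage(img, wantID, tarball string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", img).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // correct image already present: no transfer needed
	}
	// remove any stale tag before loading the cached copy
	_ = exec.Command("docker", "rmi", img).Run()

	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	load := exec.Command("docker", "load") // equivalent of `cat tarball | docker load`
	load.Stdin = f
	return load.Run()
}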
	I0311 04:17:18.247282    4187 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0311 04:17:18.247335    4187 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-629000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
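One detail worth noting in the rendered drop-in above: the empty ExecStart= line is the standard systemd convention for clearing the ExecStart inherited from the base kubelet.service before the override supplies its own command line; without the reset, systemd would reject a second ExecStart for a non-oneshot service.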
	I0311 04:17:18.247393    4187 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 04:17:18.261113    4187 cni.go:84] Creating CNI manager for ""
	I0311 04:17:18.261125    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:17:18.261130    4187 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 04:17:18.261140    4187 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-629000 NodeName:stopped-upgrade-629000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 04:17:18.261203    4187 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-629000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
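The rendered file stacks four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch of walking such a multi-document file with gopkg.in/yaml.v3 (purely illustrative; kubeadm does its own parsing):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // yields one document per "---" section
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all four documents are consumed
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}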
	
	I0311 04:17:18.261256    4187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0311 04:17:18.264263    4187 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 04:17:18.264296    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 04:17:18.267463    4187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0311 04:17:18.272628    4187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 04:17:18.277736    4187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0311 04:17:18.282706    4187 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0311 04:17:18.283921    4187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
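The bash one-liner above is an idempotent hosts update: filter out any existing `control-plane.minikube.internal` line, append the fresh `10.0.2.15` mapping, and copy the temp file back over /etc/hosts in one sudo step. The same idea as a hedged Go sketch (simplified: no temp file, no sudo, function name hypothetical):

package hosts

import (
	"os"
	"strings"
)

// setHostsEntry drops any old line ending in "<TAB>host" and appends
// a fresh "ip<TAB>host" mapping, mirroring the grep -v / echo pipeline.
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	kept := []string{}
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}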
	I0311 04:17:18.287827    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:17:18.352398    4187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:17:18.358118    4187 certs.go:68] Setting up /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000 for IP: 10.0.2.15
	I0311 04:17:18.358125    4187 certs.go:194] generating shared ca certs ...
	I0311 04:17:18.358134    4187 certs.go:226] acquiring lock for ca certs: {Name:mk0eff4ed47e91bcbb09c749a04fbf8f2901eda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.358278    4187 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key
	I0311 04:17:18.358322    4187 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key
	I0311 04:17:18.358327    4187 certs.go:256] generating profile certs ...
	I0311 04:17:18.358398    4187 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.key
	I0311 04:17:18.358415    4187 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5
	I0311 04:17:18.358429    4187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0311 04:17:18.463977    4187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5 ...
	I0311 04:17:18.463995    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5: {Name:mk880e1d74fdec3c125cfeb3e8aa66f979538b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.464295    4187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5 ...
	I0311 04:17:18.464300    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5: {Name:mkb0249819e3f4a19648b4a9e7b9bb2b95cec646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.464431    4187 certs.go:381] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt.33256dc5 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt
	I0311 04:17:18.464577    4187 certs.go:385] copying /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key.33256dc5 -> /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key
	I0311 04:17:18.464729    4187 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/proxy-client.key
	I0311 04:17:18.464864    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem (1338 bytes)
	W0311 04:17:18.464892    4187 certs.go:480] ignoring /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434_empty.pem, impossibly tiny 0 bytes
	I0311 04:17:18.464898    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 04:17:18.464916    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem (1082 bytes)
	I0311 04:17:18.464933    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem (1123 bytes)
	I0311 04:17:18.464948    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/certs/key.pem (1675 bytes)
	I0311 04:17:18.464992    4187 certs.go:484] found cert: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem (1708 bytes)
	I0311 04:17:18.465318    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 04:17:18.472574    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 04:17:18.479863    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 04:17:18.487158    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 04:17:18.494274    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 04:17:18.500945    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 04:17:18.507985    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 04:17:18.515456    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 04:17:18.523491    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/certs/1434.pem --> /usr/share/ca-certificates/1434.pem (1338 bytes)
	I0311 04:17:18.531706    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/ssl/certs/14342.pem --> /usr/share/ca-certificates/14342.pem (1708 bytes)
	I0311 04:17:18.539162    4187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 04:17:18.546477    4187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 04:17:18.551879    4187 ssh_runner.go:195] Run: openssl version
	I0311 04:17:18.554048    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14342.pem && ln -fs /usr/share/ca-certificates/14342.pem /etc/ssl/certs/14342.pem"
	I0311 04:17:18.557391    4187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14342.pem
	I0311 04:17:18.558757    4187 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 10:43 /usr/share/ca-certificates/14342.pem
	I0311 04:17:18.558778    4187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14342.pem
	I0311 04:17:18.560503    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14342.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 04:17:18.563889    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 04:17:18.567418    4187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:17:18.568952    4187 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:17:18.568982    4187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 04:17:18.570813    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 04:17:18.573896    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1434.pem && ln -fs /usr/share/ca-certificates/1434.pem /etc/ssl/certs/1434.pem"
	I0311 04:17:18.576920    4187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1434.pem
	I0311 04:17:18.578942    4187 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 10:43 /usr/share/ca-certificates/1434.pem
	I0311 04:17:18.578981    4187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1434.pem
	I0311 04:17:18.580841    4187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1434.pem /etc/ssl/certs/51391683.0"
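The pattern in this block comes from OpenSSL's lookup convention: a CA certificate is found via a symlink named `<subject-hash>.0`, where the hash is the output of `openssl x509 -hash -noout`. That is why each PEM copied into /usr/share/ca-certificates gets a link such as /etc/ssl/certs/b5213941.0. A hedged sketch of the hash-then-link step (shelling out to openssl, as the log does):

package certs

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links pem under /etc/ssl/certs/<subject-hash>.0,
// the name OpenSSL uses when it resolves a CA at verify time.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace a stale link
	return os.Symlink(pem, link)
}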
	I0311 04:17:18.584448    4187 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 04:17:18.586052    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 04:17:18.587897    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 04:17:18.589968    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 04:17:18.592100    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 04:17:18.594179    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 04:17:18.596124    4187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
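Each `-checkend 86400` probe above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what pushes minikube to regenerate a cert. As a one-function sketch:

package certs

import "os/exec"

// certValidFor24h reports whether the cert at path is still valid
// 86400 seconds (24 hours) from now, mirroring the -checkend probes above.
func certValidFor24h(path string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}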
	I0311 04:17:18.598182    4187 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50372 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 04:17:18.598259    4187 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:17:18.609251    4187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 04:17:18.612708    4187 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 04:17:18.612716    4187 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 04:17:18.612718    4187 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 04:17:18.612742    4187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 04:17:18.616534    4187 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 04:17:18.616799    4187 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-629000" does not appear in /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:17:18.616895    4187 kubeconfig.go:62] /Users/jenkins/minikube-integration/18350-986/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-629000" cluster setting kubeconfig missing "stopped-upgrade-629000" context setting]
	I0311 04:17:18.617107    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:17:18.617535    4187 kapi.go:59] client config for stopped-upgrade-629000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5bfd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 04:17:18.617834    4187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 04:17:18.620723    4187 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-629000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
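The drift check is just diff's exit status: `sudo diff -u` over the live kubeadm.yaml and the freshly rendered kubeadm.yaml.new exits 1 when they differ, which is what triggers the reconfigure path seen above. A hedged Go sketch of that check (paths as in the log, function name hypothetical):

package kubeadm

import (
	"errors"
	"os/exec"
)

// detectDrift reports whether the live config differs from the newly
// rendered one. diff exits 0 for identical files and 1 for a difference.
func detectDrift() (bool, error) {
	err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
	if err == nil {
		return false, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ: copy the .new file over and reconfigure
	}
	return false, err // exit >1 or exec failure: a real error
}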
	I0311 04:17:18.620728    4187 kubeadm.go:1153] stopping kube-system containers ...
	I0311 04:17:18.620767    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 04:17:18.631287    4187 docker.go:483] Stopping containers: [a673dc823c5e fc1103117f22 2edd01543dcf 870860a04f07 cda83ca956bb 47ea3d48656f e28e02ee3daa 0ff9bfcb7135]
	I0311 04:17:18.631358    4187 ssh_runner.go:195] Run: docker stop a673dc823c5e fc1103117f22 2edd01543dcf 870860a04f07 cda83ca956bb 47ea3d48656f e28e02ee3daa 0ff9bfcb7135
	I0311 04:17:18.650245    4187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 04:17:18.655132    4187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:17:18.658019    4187 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 04:17:18.658025    4187 kubeadm.go:156] found existing configuration files:
	
	I0311 04:17:18.658047    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf
	I0311 04:17:18.660433    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 04:17:18.660456    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:17:18.663440    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf
	I0311 04:17:18.666106    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 04:17:18.666135    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:17:18.668550    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf
	I0311 04:17:18.671481    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 04:17:18.671502    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:17:18.674377    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf
	I0311 04:17:18.676756    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 04:17:18.676777    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 04:17:18.679918    4187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:17:18.682792    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:18.706579    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:19.349298    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:19.463524    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 04:17:19.485999    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
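Because existing configuration files were found, the restart path runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of a full `kubeadm init`, so existing state is reused where possible. A hedged sketch of that phase sequence (paths as in the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		// run each phase with minikube's pinned binaries first in PATH
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			log.Fatalf("kubeadm init phase %s: %v", p, err)
		}
	}
}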
	I0311 04:17:19.523466    4187 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:17:19.523552    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:17:20.025584    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:17:20.525556    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:17:20.529732    4187 api_server.go:72] duration metric: took 1.006297833s to wait for apiserver process to appear ...
	I0311 04:17:20.529741    4187 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:17:20.529749    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:25.531660    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:25.531681    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:30.531748    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:30.531795    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:35.532023    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:35.532064    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:40.532481    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:40.532561    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:45.533720    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:45.533786    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:50.534690    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:50.534739    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:17:55.535888    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:17:55.536022    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:00.537829    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:00.537899    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:05.538696    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:05.538770    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:10.541234    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:10.541375    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:15.543853    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:15.543912    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:20.545240    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
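The wait loop above first polls for a kube-apiserver process every 500ms, then switches to hitting /healthz with a roughly 5-second per-request timeout; every probe here dies with `context deadline exceeded`, meaning nothing is answering on 10.0.2.15:8443 at all. A minimal sketch of such a healthz wait (endpoint and timings illustrative; TLS verification is skipped here for brevity, whereas the real client trusts minikube's CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between probes above
		Transport: &http.Transport{
			// illustrative only: the real client verifies against minikube's CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)); err != nil {
		fmt.Println(err)
	}
}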
	I0311 04:18:20.545437    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:20.562758    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:20.562844    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:20.576180    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:20.576264    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:20.587599    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:20.587675    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:20.598295    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:20.598375    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:20.608360    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:20.608429    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:20.618427    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:20.618491    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:20.628528    4187 logs.go:276] 0 containers: []
	W0311 04:18:20.628539    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:20.628596    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:20.638958    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:20.638975    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:20.638993    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:20.656495    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:20.656505    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:20.696159    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:20.696167    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:20.714337    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:20.714346    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:20.726491    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:20.726500    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:20.737544    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:20.737555    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:20.750084    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:20.750096    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:20.767584    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:20.767594    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:20.779028    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:20.779042    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:20.783551    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:20.783558    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:20.866691    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:20.866705    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:20.881749    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:20.881766    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:20.926928    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:20.926939    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:20.942731    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:20.942742    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:20.954945    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:20.954957    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:20.978944    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:20.978951    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:20.993392    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:20.993402    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
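Once a healthz wait times out, the diagnostic sweep above lists containers by the `k8s_<component>` name filter and tails the last 400 lines of each, alongside the kubelet and docker journals, dmesg, and `kubectl describe nodes`. The per-component part, sketched (component list taken from the filters in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// find every container (running or exited) for this component
		ids, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		for _, id := range strings.Fields(string(ids)) {
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, out)
		}
	}
}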
	I0311 04:18:23.506823    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:28.509038    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:28.509185    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:28.523677    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:28.523776    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:28.535961    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:28.536045    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:28.547244    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:28.547318    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:28.557690    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:28.557764    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:28.568395    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:28.568467    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:28.579095    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:28.579184    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:28.589466    4187 logs.go:276] 0 containers: []
	W0311 04:18:28.589477    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:28.589534    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:28.600007    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:28.600025    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:28.600030    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:28.614070    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:28.614081    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:28.629259    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:28.629270    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:28.641569    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:28.641581    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:28.680889    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:28.680902    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:28.698078    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:28.698087    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:28.710176    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:28.710192    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:28.736314    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:28.736328    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:28.775327    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:28.775343    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:28.792740    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:28.792753    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:28.804731    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:28.804741    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:28.808822    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:28.808831    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:28.823267    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:28.823279    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:28.836582    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:28.836596    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:28.855086    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:28.855096    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:28.866780    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:28.866789    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:28.903852    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:28.903864    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:31.421573    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:36.422721    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:36.422891    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:36.437598    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:36.437669    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:36.452205    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:36.452273    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:36.463188    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:36.463255    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:36.473651    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:36.473723    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:36.484769    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:36.484837    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:36.501260    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:36.501348    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:36.512056    4187 logs.go:276] 0 containers: []
	W0311 04:18:36.512067    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:36.512121    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:36.523222    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:36.523245    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:36.523250    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:36.535265    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:36.535274    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:36.570845    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:36.570859    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:36.585443    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:36.585454    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:36.599705    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:36.599715    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:36.613860    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:36.613875    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:36.625502    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:36.625511    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:36.641499    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:36.641509    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:36.646397    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:36.646404    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:36.661423    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:36.661433    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:36.679401    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:36.679412    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:36.692943    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:36.692954    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:36.716395    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:36.716403    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:36.728140    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:36.728150    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:36.769444    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:36.769455    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:36.786255    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:36.786266    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:36.797944    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:36.797954    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
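From here the same cycle repeats until the surrounding start-up deadline expires: probe /healthz, time out after ~5s, re-enumerate containers, dump every log source, wait, probe again. The timestamps suggest a pause of roughly 2.5 seconds between the end of one gathering pass and the next probe. Structurally it is a wait loop along these lines (function names and intervals are assumptions inferred from the log, not minikube's code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForAPIServer sketches the retry loop visible in this log. probe and
    // gather stand in for the healthz check and the "Gathering logs for ..."
    // pass shown above (assumed signatures).
    func waitForAPIServer(probe func() error, gather func(), deadline time.Time) error {
    	for time.Now().Before(deadline) {
    		if err := probe(); err == nil {
    			return nil // apiserver answered /healthz in time
    		}
    		gather() // dump component logs so the failure is diagnosable
    		time.Sleep(2500 * time.Millisecond) // ~2.5s gap visible in the log
    	}
    	return errors.New("apiserver never reported healthy before the deadline")
    }

    func main() {
    	err := waitForAPIServer(
    		func() error { return errors.New("context deadline exceeded") },
    		func() { fmt.Println("gathering logs ...") },
    		time.Now().Add(8*time.Second),
    	)
    	fmt.Println(err)
    }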
	I0311 04:18:39.338715    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:44.341132    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:44.341366    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:44.365647    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:44.365802    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:44.382001    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:44.382091    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:44.394920    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:44.394998    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:44.406512    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:44.406598    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:44.417418    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:44.417487    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:44.428248    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:44.428315    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:44.438625    4187 logs.go:276] 0 containers: []
	W0311 04:18:44.438638    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:44.438698    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:44.448995    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:44.449015    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:44.449021    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:44.460956    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:44.460971    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:44.478768    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:44.478783    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:44.518119    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:44.518126    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:44.534630    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:44.534640    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:44.553274    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:44.553286    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:44.567661    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:44.567672    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:44.582038    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:44.582052    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:44.595157    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:44.595171    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:44.599302    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:44.599308    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:44.613029    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:44.613040    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
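The "container status" step above is a two-stage shell fallback: the backticked `which crictl || echo crictl` substitutes the crictl path when one is installed (or the bare name, letting that branch fail cleanly), and the outer || sudo docker ps -a falls back to Docker when crictl is unavailable. Wrapping it in /bin/bash -c keeps the backticks and || intact across the SSH hop. A compact local sketch of the same pipeline (hypothetical wrapper name):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus runs the same fallback pipeline the log shows:
    // prefer crictl if present, otherwise fall back to plain docker ps.
    func containerStatus() (string, error) {
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Print(out)
    }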
	I0311 04:18:44.625252    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:44.625263    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:44.661539    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:44.661549    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:44.701592    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:44.701609    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:44.713083    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:44.713094    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:44.724411    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:44.724420    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:44.736370    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:44.736381    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:47.262981    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:18:52.264896    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:18:52.265091    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:18:52.286223    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:18:52.286320    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:18:52.301361    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:18:52.301435    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:18:52.316010    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:18:52.316083    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:18:52.327362    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:18:52.327437    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:18:52.338010    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:18:52.338085    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:18:52.349097    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:18:52.349167    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:18:52.359345    4187 logs.go:276] 0 containers: []
	W0311 04:18:52.359360    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:18:52.359419    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:18:52.369802    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:18:52.369826    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:18:52.369831    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:18:52.387439    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:18:52.387450    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:18:52.403645    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:18:52.403666    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:18:52.415396    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:18:52.415408    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:18:52.454222    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:18:52.454231    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:18:52.458716    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:18:52.458724    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:18:52.495937    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:18:52.495949    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:18:52.507157    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:18:52.507167    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:18:52.518828    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:18:52.518837    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:18:52.530751    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:18:52.530763    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:18:52.544764    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:18:52.544777    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:18:52.556018    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:18:52.556032    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:18:52.570505    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:18:52.570519    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:18:52.585136    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:18:52.585150    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:18:52.623578    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:18:52.623590    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:18:52.642862    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:18:52.642876    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:18:52.654745    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:18:52.654757    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:18:55.181805    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:00.184083    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:00.184479    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:00.217202    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:00.217353    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:00.237069    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:00.237163    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:00.251242    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:00.251318    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:00.262703    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:00.262783    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:00.273081    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:00.273151    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:00.283701    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:00.283777    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:00.293986    4187 logs.go:276] 0 containers: []
	W0311 04:19:00.293997    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:00.294054    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:00.305015    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:00.305032    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:00.305038    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:00.316750    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:00.316758    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:00.360430    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:00.360443    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:00.374536    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:00.374547    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:00.388277    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:00.388290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:00.403996    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:00.404010    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:00.408750    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:00.408760    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:00.443533    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:00.443548    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:00.455751    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:00.455762    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:00.468996    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:00.469007    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:00.479997    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:00.480010    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:00.505167    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:00.505181    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:00.519470    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:00.519480    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:00.559089    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:00.559098    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:00.572821    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:00.572833    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:00.587146    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:00.587156    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:00.598940    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:00.598952    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:03.116516    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:08.119160    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:08.119515    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:08.157851    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:08.158013    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:08.176982    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:08.177078    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:08.190862    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:08.190940    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:08.202944    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:08.203014    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:08.213307    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:08.213364    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:08.223768    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:08.223828    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:08.233508    4187 logs.go:276] 0 containers: []
	W0311 04:19:08.233517    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:08.233568    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:08.244232    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:08.244252    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:08.244257    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:08.284003    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:08.284016    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:08.298151    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:08.298162    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:08.309985    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:08.309996    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:08.321789    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:08.321800    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:08.334456    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:08.334470    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:08.348973    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:08.348985    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:08.366250    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:08.366263    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:08.380071    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:08.380083    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:08.404940    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:08.404952    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:08.409419    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:08.409427    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:08.420639    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:08.420650    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:08.435960    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:08.435976    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:08.448883    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:08.448895    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:08.489449    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:08.489464    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:08.535458    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:08.535469    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:08.549771    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:08.549785    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:11.063404    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:16.063873    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:16.064188    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:16.096357    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:16.096481    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:16.114852    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:16.114948    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:16.128738    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:16.128822    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:16.144014    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:16.144087    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:16.154578    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:16.154647    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:16.165152    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:16.165218    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:16.175304    4187 logs.go:276] 0 containers: []
	W0311 04:19:16.175316    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:16.175377    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:16.185737    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:16.185755    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:16.185761    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:16.222843    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:16.222850    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:16.236453    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:16.236463    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:16.247751    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:16.247761    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:16.259413    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:16.259425    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:16.263521    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:16.263527    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:16.299446    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:16.299459    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:16.313923    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:16.313934    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:16.326782    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:16.326791    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:16.339156    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:16.339168    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:16.356601    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:16.356611    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:16.381059    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:16.381067    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:16.419300    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:16.419311    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:16.436073    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:16.436085    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:16.447661    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:16.447674    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:16.462596    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:16.462610    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:16.476206    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:16.476220    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:18.990583    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:23.992700    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:23.992931    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:24.012395    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:24.012488    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:24.025795    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:24.025863    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:24.038186    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:24.038257    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:24.049161    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:24.049232    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:24.060431    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:24.060491    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:24.071206    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:24.073123    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:24.087467    4187 logs.go:276] 0 containers: []
	W0311 04:19:24.087479    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:24.087539    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:24.097592    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:24.097611    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:24.097616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:24.111449    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:24.111460    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:24.122824    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:24.122835    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:24.134305    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:24.134314    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:24.173560    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:24.173569    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:24.190980    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:24.190989    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:24.202887    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:24.202898    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:24.226954    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:24.226961    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:24.263146    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:24.263157    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:24.277136    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:24.277147    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:24.291494    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:24.291505    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:24.308212    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:24.308221    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:24.320059    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:24.320071    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:24.324023    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:24.324030    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:24.363153    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:24.363165    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:24.374866    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:24.374876    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:24.385943    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:24.385957    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:26.902343    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:31.904896    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:31.905217    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:31.933279    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:31.933408    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:31.952402    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:31.952498    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:31.965969    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:31.966044    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:31.977305    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:31.977375    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:31.987898    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:31.987962    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:31.998603    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:31.998677    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:32.008990    4187 logs.go:276] 0 containers: []
	W0311 04:19:32.009001    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:32.009060    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:32.019221    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:32.019238    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:32.019243    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:32.031040    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:32.031051    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:32.035170    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:32.035177    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:32.078165    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:32.078179    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:32.124243    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:32.124253    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:32.138206    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:32.138219    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:32.155537    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:32.155548    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:32.166685    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:32.166696    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:32.178472    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:32.178486    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:32.214724    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:32.214736    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:32.235282    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:32.235295    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:32.246865    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:32.246879    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:32.260280    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:32.260289    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:32.272191    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:32.272202    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:32.288799    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:32.288810    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:32.301860    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:32.301872    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:32.326220    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:32.326229    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:34.842107    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:39.843186    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:39.843359    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:39.856260    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:39.856332    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:39.869074    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:39.869142    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:39.879930    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:39.879992    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:39.894363    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:39.894425    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:39.904426    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:39.904494    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:39.914953    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:39.915026    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:39.925165    4187 logs.go:276] 0 containers: []
	W0311 04:19:39.925179    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:39.925231    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:39.936113    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:39.936139    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:39.936145    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:39.940427    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:39.940437    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:39.975303    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:39.975314    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:39.989678    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:39.989689    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:40.001133    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:40.001146    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:40.012421    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:40.012434    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:40.035887    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:40.035894    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:40.072904    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:40.072918    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:40.110159    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:40.110171    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:40.121475    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:40.121488    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:40.135570    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:40.135583    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:40.153514    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:40.153525    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:40.164998    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:40.165009    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:40.179536    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:40.179546    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:40.191372    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:40.191384    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:40.204337    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:40.204349    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:40.216183    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:40.216195    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:42.732567    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:47.734690    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:47.734838    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:47.751366    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:47.751454    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:47.764308    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:47.764381    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:47.774925    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:47.774996    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:47.785625    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:47.785706    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:47.795861    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:47.795923    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:47.806157    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:47.806229    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:47.816488    4187 logs.go:276] 0 containers: []
	W0311 04:19:47.816499    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:47.816580    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:47.829816    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:47.829831    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:47.829836    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:47.842430    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:47.842445    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:47.854348    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:47.854359    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:47.871357    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:47.871369    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:47.883350    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:47.883359    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:47.896727    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:47.896738    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:47.921474    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:47.921481    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:47.925621    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:47.925627    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:47.963567    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:47.963576    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:47.978501    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:47.978512    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:48.017491    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:48.017499    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:48.054483    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:48.054494    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:48.066216    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:48.066228    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:48.085715    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:48.085730    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:48.097154    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:48.097165    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:48.108095    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:48.108106    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:48.122053    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:48.122065    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:50.638275    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:19:55.640775    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:19:55.641171    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:19:55.674470    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:19:55.674602    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:19:55.693465    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:19:55.693566    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:19:55.710054    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:19:55.710126    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:19:55.722610    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:19:55.722693    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:19:55.733377    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:19:55.733448    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:19:55.744079    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:19:55.744151    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:19:55.754421    4187 logs.go:276] 0 containers: []
	W0311 04:19:55.754431    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:19:55.754485    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:19:55.771586    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:19:55.771603    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:19:55.771610    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:19:55.782826    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:19:55.782837    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:19:55.803774    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:19:55.803791    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:19:55.822008    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:19:55.822020    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:19:55.836249    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:19:55.836264    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:19:55.849933    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:19:55.849944    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:19:55.861282    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:19:55.861292    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:19:55.872539    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:19:55.872548    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:19:55.909486    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:19:55.909495    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:19:55.913532    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:19:55.913537    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:19:55.950648    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:19:55.950660    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:19:55.969825    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:19:55.969837    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:19:55.984256    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:19:55.984269    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:19:55.998272    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:19:55.998285    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:19:56.009792    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:19:56.009805    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:19:56.033382    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:19:56.033389    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:19:56.071604    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:19:56.071616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:19:58.588295    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:03.590515    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:03.590678    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:03.611660    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:03.611759    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:03.627492    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:03.627571    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:03.655873    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:03.655947    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:03.668150    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:03.668280    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:03.685013    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:03.685078    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:03.696537    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:03.696614    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:03.707073    4187 logs.go:276] 0 containers: []
	W0311 04:20:03.707083    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:03.707136    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:03.717947    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:03.717966    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:03.717973    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:03.729995    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:03.730006    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:03.734617    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:03.734627    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:03.770503    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:03.770517    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:03.785661    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:03.785670    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:03.797592    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:03.797603    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:03.812232    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:03.812244    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:03.851072    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:03.851087    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:03.864985    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:03.864996    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:03.877872    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:03.877881    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:03.889748    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:03.889760    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:03.913176    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:03.913185    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:03.953051    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:03.953062    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:03.966922    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:03.966932    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:03.979071    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:03.979083    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:03.996276    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:03.996290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:04.008899    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:04.008914    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:06.521108    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:11.523209    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:11.523388    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:11.548253    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:11.548344    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:11.562998    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:11.563081    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:11.574193    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:11.574253    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:11.584567    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:11.584640    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:11.595047    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:11.595127    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:11.605750    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:11.605812    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:11.615541    4187 logs.go:276] 0 containers: []
	W0311 04:20:11.615551    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:11.615600    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:11.625903    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
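
Each enumeration pass, like the one ending above, issues one docker ps per control-plane component, filtering on the k8s_<name> prefix and printing only the IDs; the "N containers: [...]" lines are the parsed output. A self-contained sketch of that step follows — the helper name is an assumption, while the docker flags and component list are the ones visible in the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs reproduces the enumeration commands in the log:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // One ID per line; Fields also tolerates trailing whitespace.
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The component set matches the filters that appear in the log.
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
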
	I0311 04:20:11.625921    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:11.625926    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:11.660733    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:11.660744    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:11.672279    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:11.672290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:11.689147    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:11.689157    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:11.701235    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:11.701248    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:11.712815    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:11.712826    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:11.726258    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:11.726268    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:11.740209    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:11.740219    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:11.752219    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:11.752233    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:11.768074    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:11.768088    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:11.772349    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:11.772356    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:11.788664    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:11.788673    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:11.826360    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:11.826370    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:11.838138    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:11.838149    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:11.877426    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:11.877446    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:11.892910    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:11.892923    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:11.907647    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:11.907658    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:14.430445    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:19.432771    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:19.433177    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:19.468123    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:19.468266    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:19.487696    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:19.487797    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:19.506813    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:19.506893    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:19.520149    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:19.520223    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:19.530504    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:19.530571    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:19.544743    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:19.544819    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:19.558906    4187 logs.go:276] 0 containers: []
	W0311 04:20:19.558921    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:19.558980    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:19.570284    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:19.570307    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:19.570314    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:19.581485    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:19.581497    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:19.603884    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:19.603892    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:19.639970    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:19.639977    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:19.660600    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:19.660610    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:19.671980    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:19.671992    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:19.684433    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:19.684447    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:19.709069    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:19.709081    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:19.744433    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:19.744446    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:19.782785    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:19.782796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:19.797785    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:19.797796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:19.813881    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:19.813890    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:19.818740    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:19.818750    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:19.835381    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:19.835394    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:19.846939    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:19.846950    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:19.866522    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:19.866533    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:19.881024    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:19.881035    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
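
The gathering pass itself is a fixed battery of shell commands run through the SSH runner: docker logs --tail 400 for every container found, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg for the kernel, crictl/docker ps for container status, and kubectl describe nodes via the pinned v1.24.1 binary. Here is a sketch that replays the same battery; the run helper merely stands in for minikube's ssh_runner (an assumption), while the command strings are taken verbatim from the log above.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run stands in for minikube's ssh_runner: the log's `/bin/bash -c "..."`
    // lines are reproduced by shelling out through bash locally.
    func run(cmd string) {
        fmt.Println("Run:", cmd)
        _ = exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // gatherLogs replays one diagnostics pass for the given container IDs.
    func gatherLogs(containerIDs []string) {
        for _, id := range containerIDs {
            run("docker logs --tail 400 " + id)
        }
        run("sudo journalctl -u kubelet -n 400")
        run("sudo journalctl -u docker -u cri-docker -n 400")
        run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
            " --kubeconfig=/var/lib/minikube/kubeconfig")
    }

    func main() {
        gatherLogs([]string{"3f6e8cee7efa", "2edd01543dcf"})
    }
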
	I0311 04:20:22.393735    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:27.396043    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:27.396359    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:27.426005    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:27.426122    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:27.445357    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:27.445454    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:27.460224    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:27.460305    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:27.472678    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:27.472748    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:27.483128    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:27.483192    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:27.494060    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:27.494138    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:27.504434    4187 logs.go:276] 0 containers: []
	W0311 04:20:27.504447    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:27.504500    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:27.515285    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:27.515301    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:27.515307    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:27.526246    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:27.526256    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:27.537519    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:27.537530    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:27.575350    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:27.575359    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:27.589578    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:27.589591    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:27.603120    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:27.603131    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:27.614506    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:27.614517    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:27.626007    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:27.626017    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:27.638657    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:27.638668    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:27.643016    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:27.643027    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:27.676968    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:27.676980    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:27.716101    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:27.716113    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:27.733046    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:27.733056    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:27.745725    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:27.745739    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:27.769764    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:27.769772    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:27.783937    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:27.783949    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:27.798787    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:27.798796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:30.312610    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:35.314898    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:35.315203    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:35.351439    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:35.351565    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:35.370437    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:35.370524    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:35.384813    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:35.384877    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:35.397389    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:35.397466    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:35.408320    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:35.408402    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:35.418625    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:35.418683    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:35.429224    4187 logs.go:276] 0 containers: []
	W0311 04:20:35.429234    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:35.429293    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:35.443853    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:35.443873    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:35.443879    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:35.458232    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:35.458243    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:35.469599    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:35.469611    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:35.480772    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:35.480783    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:35.505063    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:35.505071    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:35.543270    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:35.543279    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:35.548309    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:35.548316    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:35.566508    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:35.566523    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:35.580781    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:35.580792    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:35.595841    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:35.595850    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:35.607637    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:35.607648    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:35.642163    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:35.642174    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:35.680317    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:35.680326    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:35.692279    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:35.692290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:35.709450    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:35.709459    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:35.721207    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:35.721221    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:35.734381    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:35.734391    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:38.247423    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:43.248565    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:43.248722    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:43.260491    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:43.260566    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:43.270771    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:43.270844    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:43.281155    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:43.281226    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:43.291636    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:43.291709    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:43.301787    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:43.301855    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:43.312389    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:43.312457    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:43.322593    4187 logs.go:276] 0 containers: []
	W0311 04:20:43.322606    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:43.322659    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:43.332836    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:43.332852    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:43.332857    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:43.346958    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:43.346968    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:43.361337    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:43.361348    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:43.375191    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:43.375203    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:43.386547    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:43.386558    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:43.401599    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:43.401609    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:43.413273    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:43.413284    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:43.425650    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:43.425660    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:43.463941    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:43.463949    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:43.501987    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:43.501997    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:43.514158    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:43.514169    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:43.527220    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:43.527230    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:43.537870    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:43.537882    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:43.560194    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:43.560204    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:43.564246    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:43.564255    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:43.602691    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:43.602704    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:43.619390    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:43.619402    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:46.132845    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:51.135203    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:51.135509    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:51.169608    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:51.169728    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:51.190667    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:51.190761    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:51.204338    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:51.204418    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:51.216019    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:51.216099    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:51.226652    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:51.226733    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:51.237473    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:51.237549    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:51.247616    4187 logs.go:276] 0 containers: []
	W0311 04:20:51.247631    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:51.247689    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:51.262299    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:51.262318    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:51.262323    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:51.273485    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:51.273497    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:51.288601    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:51.288612    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:51.302202    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:51.302212    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:51.313938    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:51.313948    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:51.324763    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:51.324774    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:51.347712    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:51.347720    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:51.388869    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:51.388881    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:51.406794    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:51.406804    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:51.444670    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:51.444682    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:20:51.449579    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:51.449586    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:51.485064    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:51.485076    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:51.502604    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:51.502616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:51.516531    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:51.516541    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:51.529907    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:51.529920    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:51.545493    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:51.545504    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:51.557748    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:51.557759    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:54.071672    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:20:59.075093    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:20:59.075293    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:20:59.094661    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:20:59.094759    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:20:59.109324    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:20:59.109410    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:20:59.121553    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:20:59.121620    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:20:59.132275    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:20:59.132343    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:20:59.146929    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:20:59.146995    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:20:59.157316    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:20:59.157378    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:20:59.167155    4187 logs.go:276] 0 containers: []
	W0311 04:20:59.167170    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:20:59.167221    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:20:59.179382    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:20:59.179401    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:20:59.179407    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:20:59.216776    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:20:59.216788    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:20:59.231218    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:20:59.231230    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:20:59.243232    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:20:59.243246    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:20:59.257606    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:20:59.257621    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:20:59.272999    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:20:59.273011    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:20:59.284929    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:20:59.284940    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:20:59.320283    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:20:59.320295    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:20:59.334612    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:20:59.334626    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:20:59.349812    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:20:59.349823    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:20:59.366694    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:20:59.366706    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:20:59.383199    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:20:59.383210    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:20:59.394149    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:20:59.394159    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:20:59.431107    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:20:59.431116    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:20:59.442969    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:20:59.442979    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:20:59.454342    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:20:59.454353    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:20:59.475908    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:20:59.475915    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:21:01.981760    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:06.983880    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:06.983992    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:21:07.007391    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:21:07.007469    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:21:07.018176    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:21:07.018250    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:21:07.028131    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:21:07.028196    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:21:07.042714    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:21:07.042777    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:21:07.053374    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:21:07.053459    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:21:07.063814    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:21:07.063883    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:21:07.074249    4187 logs.go:276] 0 containers: []
	W0311 04:21:07.074261    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:21:07.074318    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:21:07.091353    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:21:07.091369    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:21:07.091374    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:21:07.129345    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:21:07.129354    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:21:07.149252    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:21:07.149262    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:21:07.166966    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:21:07.166983    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:21:07.178876    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:21:07.178891    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:21:07.191613    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:21:07.191623    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:21:07.213144    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:21:07.213153    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:21:07.217051    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:21:07.217061    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:21:07.252780    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:21:07.252792    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:21:07.277918    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:21:07.277935    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:21:07.306285    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:21:07.306298    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:21:07.319145    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:21:07.319157    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:21:07.333419    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:21:07.333431    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:21:07.371861    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:21:07.371873    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:21:07.383433    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:21:07.383443    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:21:07.394934    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:21:07.394946    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:21:07.408689    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:21:07.408704    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:21:09.930828    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:14.932887    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:14.932997    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:21:14.944261    4187 logs.go:276] 2 containers: [3f6e8cee7efa 2edd01543dcf]
	I0311 04:21:14.944332    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:21:14.955468    4187 logs.go:276] 2 containers: [385a0783fcaf 870860a04f07]
	I0311 04:21:14.955536    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:21:14.967087    4187 logs.go:276] 1 containers: [5615f586da83]
	I0311 04:21:14.967156    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:21:14.977623    4187 logs.go:276] 2 containers: [358739d9f929 fc1103117f22]
	I0311 04:21:14.977682    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:21:14.987981    4187 logs.go:276] 1 containers: [b5f7e20d0df4]
	I0311 04:21:14.988050    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:21:14.998539    4187 logs.go:276] 2 containers: [2e1de406265e a673dc823c5e]
	I0311 04:21:14.998606    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:21:15.009631    4187 logs.go:276] 0 containers: []
	W0311 04:21:15.009643    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:21:15.009698    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:21:15.019859    4187 logs.go:276] 2 containers: [fd6ebddcdb70 f9dc957c5a19]
	I0311 04:21:15.019878    4187 logs.go:123] Gathering logs for coredns [5615f586da83] ...
	I0311 04:21:15.019884    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5615f586da83"
	I0311 04:21:15.031099    4187 logs.go:123] Gathering logs for kube-controller-manager [2e1de406265e] ...
	I0311 04:21:15.031115    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e1de406265e"
	I0311 04:21:15.048915    4187 logs.go:123] Gathering logs for kube-controller-manager [a673dc823c5e] ...
	I0311 04:21:15.048925    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a673dc823c5e"
	I0311 04:21:15.061506    4187 logs.go:123] Gathering logs for storage-provisioner [fd6ebddcdb70] ...
	I0311 04:21:15.061519    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6ebddcdb70"
	I0311 04:21:15.073257    4187 logs.go:123] Gathering logs for storage-provisioner [f9dc957c5a19] ...
	I0311 04:21:15.073266    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9dc957c5a19"
	I0311 04:21:15.084199    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:21:15.084210    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:21:15.096737    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:21:15.096747    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:21:15.134120    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:21:15.134134    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:21:15.138718    4187 logs.go:123] Gathering logs for kube-apiserver [3f6e8cee7efa] ...
	I0311 04:21:15.138731    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f6e8cee7efa"
	I0311 04:21:15.152618    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:21:15.152630    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:21:15.176361    4187 logs.go:123] Gathering logs for kube-scheduler [fc1103117f22] ...
	I0311 04:21:15.176371    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc1103117f22"
	I0311 04:21:15.191749    4187 logs.go:123] Gathering logs for etcd [870860a04f07] ...
	I0311 04:21:15.191761    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870860a04f07"
	I0311 04:21:15.206470    4187 logs.go:123] Gathering logs for kube-scheduler [358739d9f929] ...
	I0311 04:21:15.206482    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 358739d9f929"
	I0311 04:21:15.217916    4187 logs.go:123] Gathering logs for etcd [385a0783fcaf] ...
	I0311 04:21:15.217927    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385a0783fcaf"
	I0311 04:21:15.232302    4187 logs.go:123] Gathering logs for kube-proxy [b5f7e20d0df4] ...
	I0311 04:21:15.232313    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5f7e20d0df4"
	I0311 04:21:15.244441    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:21:15.244453    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:21:15.280326    4187 logs.go:123] Gathering logs for kube-apiserver [2edd01543dcf] ...
	I0311 04:21:15.280338    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edd01543dcf"
	I0311 04:21:17.820995    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:22.822675    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:22.822755    4187 kubeadm.go:591] duration metric: took 4m4.217299542s to restartPrimaryControlPlane
	W0311 04:21:22.822823    4187 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
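
The 4m4s reported for restartPrimaryControlPlane is consistent with the loop above: each iteration costs the ~5-second healthz timeout plus a few seconds of log gathering, so the ~8-second cycles visible in the timestamps account for roughly thirty failed health checks before minikube gives up and falls back to a full kubeadm reset, as it does next.
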
	I0311 04:21:22.822858    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0311 04:21:23.849406    4187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.02656125s)
	I0311 04:21:23.849492    4187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 04:21:23.854373    4187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 04:21:23.857201    4187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 04:21:23.859990    4187 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 04:21:23.859995    4187 kubeadm.go:156] found existing configuration files:
	
	I0311 04:21:23.860016    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf
	I0311 04:21:23.862611    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 04:21:23.862632    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 04:21:23.865281    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf
	I0311 04:21:23.868564    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 04:21:23.868594    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 04:21:23.871695    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf
	I0311 04:21:23.874166    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 04:21:23.874186    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 04:21:23.876744    4187 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf
	I0311 04:21:23.879976    4187 kubeadm.go:162] "https://control-plane.minikube.internal:50372" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50372 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 04:21:23.879998    4187 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
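The grep/rm sequence above implements a stale-kubeconfig sweep: for each of the four kubeconfig files, minikube greps for the expected control-plane endpoint and deletes the file when the endpoint is absent (grep also exits non-zero when the file itself does not exist, as here). A compact sketch of that loop, with the file list and endpoint taken from the log; the helper name is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanupStaleConfigs removes any kubeconfig that does not reference the
	// expected control-plane endpoint. grep exits non-zero both when the
	// endpoint is missing and when the file is absent.
	func cleanupStaleConfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanupStaleConfigs("https://control-plane.minikube.internal:50372")
	}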
	I0311 04:21:23.882802    4187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 04:21:23.900955    4187 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0311 04:21:23.900987    4187 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 04:21:23.956335    4187 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 04:21:23.956450    4187 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 04:21:23.956503    4187 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0311 04:21:24.006321    4187 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 04:21:24.010457    4187 out.go:204]   - Generating certificates and keys ...
	I0311 04:21:24.010495    4187 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 04:21:24.010529    4187 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 04:21:24.010579    4187 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 04:21:24.010611    4187 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 04:21:24.010648    4187 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 04:21:24.010677    4187 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 04:21:24.010712    4187 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 04:21:24.010742    4187 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 04:21:24.010786    4187 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 04:21:24.010826    4187 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 04:21:24.010843    4187 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 04:21:24.010873    4187 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 04:21:24.108215    4187 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 04:21:24.280071    4187 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 04:21:24.460963    4187 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 04:21:24.615849    4187 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 04:21:24.644300    4187 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 04:21:24.644766    4187 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 04:21:24.644910    4187 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 04:21:24.737485    4187 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 04:21:24.740726    4187 out.go:204]   - Booting up control plane ...
	I0311 04:21:24.740777    4187 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 04:21:24.740820    4187 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 04:21:24.740855    4187 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 04:21:24.740897    4187 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 04:21:24.741824    4187 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 04:21:29.743533    4187 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.003525 seconds
	I0311 04:21:29.743641    4187 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 04:21:29.749294    4187 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 04:21:30.259843    4187 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 04:21:30.259937    4187 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-629000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 04:21:30.771904    4187 kubeadm.go:309] [bootstrap-token] Using token: aitobb.rtfe8rta363qoqrs
	I0311 04:21:30.782585    4187 out.go:204]   - Configuring RBAC rules ...
	I0311 04:21:30.782724    4187 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 04:21:30.788029    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 04:21:30.792450    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 04:21:30.794046    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0311 04:21:30.795555    4187 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 04:21:30.797211    4187 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 04:21:30.804082    4187 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 04:21:30.988323    4187 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 04:21:31.191374    4187 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 04:21:31.191817    4187 kubeadm.go:309] 
	I0311 04:21:31.191850    4187 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 04:21:31.191854    4187 kubeadm.go:309] 
	I0311 04:21:31.191902    4187 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 04:21:31.191908    4187 kubeadm.go:309] 
	I0311 04:21:31.191934    4187 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 04:21:31.191962    4187 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 04:21:31.191986    4187 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 04:21:31.191988    4187 kubeadm.go:309] 
	I0311 04:21:31.192020    4187 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 04:21:31.192026    4187 kubeadm.go:309] 
	I0311 04:21:31.192054    4187 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 04:21:31.192057    4187 kubeadm.go:309] 
	I0311 04:21:31.192085    4187 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 04:21:31.192138    4187 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 04:21:31.192178    4187 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 04:21:31.192184    4187 kubeadm.go:309] 
	I0311 04:21:31.192231    4187 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 04:21:31.192274    4187 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 04:21:31.192278    4187 kubeadm.go:309] 
	I0311 04:21:31.192322    4187 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aitobb.rtfe8rta363qoqrs \
	I0311 04:21:31.192396    4187 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e \
	I0311 04:21:31.192406    4187 kubeadm.go:309] 	--control-plane 
	I0311 04:21:31.192410    4187 kubeadm.go:309] 
	I0311 04:21:31.192447    4187 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 04:21:31.192450    4187 kubeadm.go:309] 
	I0311 04:21:31.192502    4187 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aitobb.rtfe8rta363qoqrs \
	I0311 04:21:31.192557    4187 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eab4234e68e1b3b03f0f7cf9465283cfbf6e9f9754481cb8053f1f94c272ad6e 
	I0311 04:21:31.192743    4187 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 04:21:31.192753    4187 cni.go:84] Creating CNI manager for ""
	I0311 04:21:31.192761    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:21:31.200563    4187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 04:21:31.204550    4187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 04:21:31.207632    4187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
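The scp line above pushes a 457-byte bridge conflist from memory to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not in the log; the sketch below writes an illustrative bridge + portmap chain of the shape such a conflist typically takes (the JSON values are assumptions, not the bytes minikube generated):

	package main

	import "os"

	// An illustrative bridge CNI chain; field values are assumed, not taken
	// from the 457-byte file referenced in the log.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}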
	I0311 04:21:31.212513    4187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 04:21:31.212561    4187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 04:21:31.212569    4187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-629000 minikube.k8s.io/updated_at=2024_03_11T04_21_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=stopped-upgrade-629000 minikube.k8s.io/primary=true
	I0311 04:21:31.250075    4187 kubeadm.go:1106] duration metric: took 37.557458ms to wait for elevateKubeSystemPrivileges
	I0311 04:21:31.258895    4187 ops.go:34] apiserver oom_adj: -16
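The oom_adj probe above (`cat /proc/$(pgrep kube-apiserver)/oom_adj`, result -16) verifies that the apiserver process is deprioritized for the kernel OOM killer. A small sketch of the same check in Go, using pgrep for the PID lookup as the logged command does; everything beyond the logged commands is an assumption:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// apiserverOOMAdj finds the newest kube-apiserver process and returns the
	// contents of its /proc/<pid>/oom_adj file (e.g. "-16").
	func apiserverOOMAdj() (string, error) {
		pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*").Output()
		if err != nil {
			return "", err
		}
		out, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		adj, err := apiserverOOMAdj()
		fmt.Println(adj, err)
	}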
	W0311 04:21:31.258928    4187 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 04:21:31.258933    4187 kubeadm.go:393] duration metric: took 4m12.668276209s to StartCluster
	I0311 04:21:31.258943    4187 settings.go:142] acquiring lock: {Name:mk914df43a11d01b4609d1cefd86c6d6814b7b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:31.259032    4187 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:21:31.259480    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/kubeconfig: {Name:mka1b8ea0dffa0092f34d56c8689bdb2eb2631e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:21:31.259687    4187 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:21:31.263571    4187 out.go:177] * Verifying Kubernetes components...
	I0311 04:21:31.259698    4187 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 04:21:31.259768    4187 config.go:182] Loaded profile config "stopped-upgrade-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 04:21:31.270449    4187 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-629000"
	I0311 04:21:31.270466    4187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 04:21:31.270476    4187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-629000"
	I0311 04:21:31.270449    4187 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-629000"
	I0311 04:21:31.270507    4187 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-629000"
	W0311 04:21:31.270511    4187 addons.go:243] addon storage-provisioner should already be in state true
	I0311 04:21:31.270545    4187 host.go:66] Checking if "stopped-upgrade-629000" exists ...
	I0311 04:21:31.271525    4187 kapi.go:59] client config for stopped-upgrade-629000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/profiles/stopped-upgrade-629000/client.key", CAFile:"/Users/jenkins/minikube-integration/18350-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f5bfd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 04:21:31.271643    4187 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-629000"
	W0311 04:21:31.271648    4187 addons.go:243] addon default-storageclass should already be in state true
	I0311 04:21:31.271655    4187 host.go:66] Checking if "stopped-upgrade-629000" exists ...
	I0311 04:21:31.276459    4187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 04:21:31.280607    4187 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 04:21:31.280613    4187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 04:21:31.280619    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:21:31.281298    4187 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:31.281305    4187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 04:21:31.281309    4187 sshutil.go:53] new ssh client: &{IP:localhost Port:50310 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/stopped-upgrade-629000/id_rsa Username:docker}
	I0311 04:21:31.346700    4187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 04:21:31.351376    4187 api_server.go:52] waiting for apiserver process to appear ...
	I0311 04:21:31.351425    4187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 04:21:31.355260    4187 api_server.go:72] duration metric: took 95.566083ms to wait for apiserver process to appear ...
	I0311 04:21:31.355267    4187 api_server.go:88] waiting for apiserver healthz status ...
	I0311 04:21:31.355273    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:31.394093    4187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 04:21:31.397183    4187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 04:21:36.357316    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:36.357342    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:41.357770    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:41.357801    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:46.358107    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:46.358126    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:51.358922    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:51.358961    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:21:56.359646    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:21:56.359669    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:01.360489    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:01.360525    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0311 04:22:01.763662    4187 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0311 04:22:01.768995    4187 out.go:177] * Enabled addons: storage-provisioner
	I0311 04:22:01.776881    4187 addons.go:505] duration metric: took 30.518085375s for enable addons: enabled=[storage-provisioner]
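The default-storageclass failure above comes down to a single List call against the storage API that never reaches the unhealthy apiserver. A minimal client-go sketch of that call, assuming the kubeconfig path from the log (package choices and error handling are illustrative, not minikube's addon code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// This is the call that times out in the log: listing StorageClasses
		// so one of them can be marked as the default.
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Println("Error listing StorageClasses:", err)
			return
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}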
	I0311 04:22:06.361410    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:06.361451    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:11.362824    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:11.362865    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:16.364661    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:16.364677    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:21.366685    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:21.366717    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:26.376439    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:26.376466    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:31.385804    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:31.385972    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:31.400085    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:31.400168    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:31.422805    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:31.422884    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:31.439860    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:31.439933    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:31.450703    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:31.450775    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:31.461801    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:31.461866    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:31.476403    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:31.476475    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:31.488032    4187 logs.go:276] 0 containers: []
	W0311 04:22:31.488045    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:31.488104    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:31.498547    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:31.498563    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:31.498570    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:31.533681    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:31.533693    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:31.545716    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:31.545729    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:31.557068    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:31.557079    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:31.568851    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:31.568861    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:31.591844    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:31.591858    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:31.603196    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:31.603206    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:31.638277    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:31.638291    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:31.652852    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:31.652863    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:31.669418    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:31.669431    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:31.684319    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:31.684332    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:31.703208    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:31.703217    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:31.714664    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:31.714676    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
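Each diagnostic cycle above (and each repetition below) follows the same shape: discover one container per control-plane component with a docker name filter, then tail the last 400 lines of each, alongside the kubelet and docker journals, dmesg, and `kubectl describe nodes`. A condensed sketch of the container half of that loop; the component names and tail length come from the log, and the helper is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gather lists containers whose names match k8s_<component> and tails the
	// last 400 log lines of each, mirroring the logged docker commands.
	func gather(component string) {
		ids, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		for _, id := range strings.Fields(string(ids)) {
			fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(out))
		}
	}

	func main() {
		for _, c := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"storage-provisioner",
		} {
			gather(c)
		}
	}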
	I0311 04:22:34.224649    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:39.231300    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:39.231737    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:39.263572    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:39.263702    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:39.283129    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:39.283215    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:39.297416    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:39.297477    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:39.309474    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:39.309546    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:39.320765    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:39.320846    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:39.331842    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:39.331915    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:39.342032    4187 logs.go:276] 0 containers: []
	W0311 04:22:39.342044    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:39.342107    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:39.352219    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:39.352236    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:39.352243    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:39.386793    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:39.386806    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:39.399081    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:39.399092    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:39.416987    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:39.417000    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:39.421624    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:39.421631    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:39.440206    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:39.440218    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:39.455130    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:39.455142    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:39.469407    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:39.469421    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:39.484041    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:39.484055    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:39.500025    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:39.500039    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:39.511801    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:39.511816    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:39.535687    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:39.535696    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:39.570227    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:39.570240    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:42.085589    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:47.090623    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:47.090873    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:47.115637    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:47.115757    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:47.135428    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:47.135519    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:47.148049    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:47.148114    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:47.159012    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:47.159084    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:47.169646    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:47.169720    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:47.180197    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:47.180264    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:47.190598    4187 logs.go:276] 0 containers: []
	W0311 04:22:47.190611    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:47.190674    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:47.201186    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:47.201202    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:47.201208    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:47.213246    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:47.213260    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:47.224968    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:47.224979    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:47.236554    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:47.236570    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:47.247848    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:47.247862    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:47.270984    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:47.270991    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:47.283087    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:47.283100    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:47.287216    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:47.287226    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:47.324862    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:47.324876    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:47.344798    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:47.344810    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:47.358081    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:47.358096    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:47.372779    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:47.372790    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:47.390035    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:47.390046    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:49.925250    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:22:54.928960    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:22:54.929165    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:22:54.944356    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:22:54.944441    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:22:54.956608    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:22:54.956685    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:22:54.967740    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:22:54.967811    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:22:54.978196    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:22:54.978265    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:22:54.988947    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:22:54.989012    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:22:55.004709    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:22:55.004778    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:22:55.015738    4187 logs.go:276] 0 containers: []
	W0311 04:22:55.015749    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:22:55.015808    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:22:55.026582    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:22:55.026596    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:22:55.026601    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:22:55.030957    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:22:55.030964    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:22:55.046890    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:22:55.046901    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:22:55.060470    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:22:55.060482    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:22:55.076407    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:22:55.076417    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:22:55.087918    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:22:55.087928    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:22:55.100053    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:22:55.100066    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:22:55.133979    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:22:55.133990    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:22:55.170795    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:22:55.170812    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:22:55.182499    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:22:55.182515    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:22:55.194179    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:22:55.194190    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:22:55.211853    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:22:55.211863    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:22:55.223722    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:22:55.223736    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:22:57.749687    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:02.752731    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:02.752896    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:02.771940    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:02.772027    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:02.787278    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:02.787355    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:02.800433    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:02.800496    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:02.811152    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:02.811217    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:02.822838    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:02.822903    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:02.833879    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:02.833943    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:02.844267    4187 logs.go:276] 0 containers: []
	W0311 04:23:02.844280    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:02.844335    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:02.854404    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:02.854421    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:02.854426    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:02.890671    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:02.890692    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:02.895784    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:02.895794    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:02.910449    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:02.910461    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:02.925038    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:02.925048    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:02.939970    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:02.939980    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:02.951648    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:02.951662    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:02.975206    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:02.975214    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:03.009463    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:03.009475    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:03.023920    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:03.023929    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:03.038050    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:03.038062    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:03.049399    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:03.049410    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:03.073617    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:03.073628    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:05.589774    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:10.591597    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:10.591815    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:10.611076    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:10.611172    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:10.625609    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:10.625686    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:10.637542    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:10.637615    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:10.648308    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:10.648380    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:10.659581    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:10.659653    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:10.673768    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:10.673835    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:10.684644    4187 logs.go:276] 0 containers: []
	W0311 04:23:10.684655    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:10.684709    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:10.694803    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:10.694823    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:10.694828    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:10.712157    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:10.712168    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:10.723703    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:10.723715    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:10.728494    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:10.728502    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:10.743050    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:10.743061    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:10.761249    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:10.761261    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:10.776046    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:10.776057    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:10.787742    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:10.787753    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:10.810257    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:10.810267    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:10.821840    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:10.821851    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:10.845586    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:10.845596    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:10.879074    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:10.879082    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:10.913330    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:10.913344    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:13.431177    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:18.433704    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:18.433860    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:18.445150    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:18.445224    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:18.455935    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:18.456012    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:18.466111    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:18.466173    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:18.479672    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:18.479734    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:18.490505    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:18.490579    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:18.501342    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:18.501407    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:18.511617    4187 logs.go:276] 0 containers: []
	W0311 04:23:18.511630    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:18.511690    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:18.522118    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:18.522132    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:18.522137    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:18.526298    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:18.526305    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:18.537559    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:18.537575    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:18.549594    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:18.549607    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:18.567523    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:18.567533    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:18.579268    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:18.579279    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:18.613030    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:18.613037    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:18.657860    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:18.657871    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:18.672584    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:18.672594    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:18.686641    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:18.686652    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:18.698464    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:18.698474    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:18.713733    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:18.713746    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:18.741950    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:18.741967    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:21.259985    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:26.262497    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:26.262745    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:26.281675    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:26.281741    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:26.295167    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:26.295225    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:26.306123    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:26.306183    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:26.316368    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:26.316420    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:26.326890    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:26.326954    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:26.337666    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:26.337724    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:26.352201    4187 logs.go:276] 0 containers: []
	W0311 04:23:26.352215    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:26.352267    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:26.363270    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
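
Each failed probe is followed by an enumeration block like the one above: one docker ps -a query per control-plane component, filtered on the k8s_<name> container-name prefix and formatted down to bare IDs (logs.go:276 then reports the count). A minimal Go sketch of that step, assuming only that docker is on PATH; listContainers is an illustrative helper, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs the same query the log shows for each component:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
// and returns one short container ID per output line.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// kindnet never runs in this configuration, hence the
			// repeated `No container was found matching "kindnet"` warning.
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
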
	I0311 04:23:26.363285    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:26.363290    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:26.375120    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:26.375130    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:26.410208    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:26.410220    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:26.423887    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:26.423900    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:26.435599    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:26.435609    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:26.447129    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:26.447140    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:26.461452    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:26.461464    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:26.476254    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:26.476265    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:26.493935    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:26.493944    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:26.498230    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:26.498239    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:26.531867    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:26.531881    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:26.546062    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:26.546073    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:26.570735    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:26.570750    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
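
The "Gathering logs for ..." lines that follow each enumeration all reduce to a fixed set of shell commands run through /bin/bash -c over SSH: journalctl for kubelet and Docker, a filtered dmesg, kubectl describe nodes, a crictl-with-docker-fallback container listing, and docker logs --tail 400 for every container found. The sketch below collects them into one table; the command strings are copied verbatim from this log, but the gatherCommands helper and its map layout are illustrative guesses, not minikube's internals.

package main

import "fmt"

// gatherCommands maps each source named in the "Gathering logs for ..."
// lines to the shell command this report shows being run for it.
func gatherCommands(containers map[string][]string) map[string]string {
	cmds := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
			" --kubeconfig=/var/lib/minikube/kubeconfig",
		"Docker": "sudo journalctl -u docker -u cri-docker -n 400",
		// Prefer crictl when present, otherwise fall back to docker ps:
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	// Every container found during enumeration gets the tail of its own log.
	for component, ids := range containers {
		for _, id := range ids {
			cmds[fmt.Sprintf("%s [%s]", component, id)] =
				fmt.Sprintf("docker logs --tail 400 %s", id)
		}
	}
	return cmds
}

func main() {
	cmds := gatherCommands(map[string][]string{
		"coredns": {"2bf12152725a", "4d26bbfa384d"},
	})
	for source, cmd := range cmds {
		fmt.Printf("Gathering logs for %s ...\n  /bin/bash -c %q\n", source, cmd)
	}
}

Note that because the enumeration uses docker ps -a, exited containers are included; this is consistent with the coredns count growing from 2 to 4 containers later in the section as restarted pods leave stopped containers behind.
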
	I0311 04:23:29.089519    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:34.090188    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:34.090349    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:34.109977    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:34.110066    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:34.123488    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:34.123565    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:34.134515    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:34.134585    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:34.145048    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:34.145116    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:34.155618    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:34.155689    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:34.171000    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:34.171072    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:34.181580    4187 logs.go:276] 0 containers: []
	W0311 04:23:34.181592    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:34.181648    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:34.192597    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:34.192616    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:34.192621    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:34.211090    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:34.211101    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:34.224857    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:34.224870    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:34.236237    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:34.236249    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:34.247839    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:34.247851    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:34.259487    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:34.259498    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:34.263612    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:34.263621    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:34.298612    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:34.298625    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:34.316416    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:34.316433    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:34.334165    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:34.334175    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:34.359458    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:34.359468    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:34.370512    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:34.370524    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:34.405512    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:34.405518    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:36.919205    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:41.921418    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:41.921642    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:41.943646    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:41.943749    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:41.959591    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:41.959673    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:41.972114    4187 logs.go:276] 2 containers: [2bf12152725a 4d26bbfa384d]
	I0311 04:23:41.972189    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:41.983108    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:41.983176    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:41.993452    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:41.993527    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:42.004040    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:42.004109    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:42.013992    4187 logs.go:276] 0 containers: []
	W0311 04:23:42.014006    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:42.014063    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:42.024601    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:42.024620    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:42.024625    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:42.036296    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:42.036306    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:42.051440    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:42.051452    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:42.063397    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:42.063407    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:42.081301    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:42.081312    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:42.116687    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:42.116698    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:42.152496    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:42.152507    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:42.166560    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:42.166575    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:42.178927    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:42.178939    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:42.190880    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:42.190893    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:42.195551    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:42.195558    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:42.210157    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:42.210171    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:42.234002    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:42.234010    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:44.747356    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:49.749668    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:49.749884    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:49.771552    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:49.771677    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:49.786866    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:49.786954    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:49.799703    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:23:49.799783    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:49.810447    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:49.810516    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:49.820602    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:49.820674    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:49.831207    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:49.831277    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:49.841552    4187 logs.go:276] 0 containers: []
	W0311 04:23:49.841564    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:49.841617    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:49.851873    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:49.851890    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:49.851894    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:49.885919    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:49.885927    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:49.897712    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:49.897722    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:49.911076    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:49.911087    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:23:49.922969    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:49.922980    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:49.938195    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:49.938206    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:49.966176    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:49.966188    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:49.977711    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:49.977725    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:49.981992    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:49.982000    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:50.016292    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:23:50.016304    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:23:50.027712    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:50.027723    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:50.039638    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:50.039650    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:50.057030    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:50.057041    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:50.068301    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:50.068315    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:50.089986    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:23:50.090000    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:23:52.603296    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:23:57.605524    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:23:57.605714    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:23:57.620033    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:23:57.620110    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:23:57.632064    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:23:57.632145    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:23:57.643361    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:23:57.643435    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:23:57.654772    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:23:57.654834    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:23:57.665232    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:23:57.665300    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:23:57.676310    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:23:57.676374    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:23:57.687139    4187 logs.go:276] 0 containers: []
	W0311 04:23:57.687154    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:23:57.687204    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:23:57.698189    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:23:57.698204    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:23:57.698209    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:23:57.731823    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:23:57.731832    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:23:57.747120    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:23:57.747129    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:23:57.758823    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:23:57.758833    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:23:57.762903    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:23:57.762913    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:23:57.774719    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:23:57.774731    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:23:57.792181    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:23:57.792192    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:23:57.816067    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:23:57.816076    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:23:57.830769    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:23:57.830779    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:23:57.843183    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:23:57.843194    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:23:57.860090    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:23:57.860101    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:23:57.871367    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:23:57.871378    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:23:57.883910    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:23:57.883921    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:23:57.919506    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:23:57.919516    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:23:57.935303    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:23:57.935316    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:00.450368    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:05.452478    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:05.452576    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:05.465232    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:05.465299    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:05.476996    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:05.477063    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:05.489552    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:05.489624    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:05.500971    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:05.501050    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:05.512577    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:05.512645    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:05.524070    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:05.524136    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:05.536220    4187 logs.go:276] 0 containers: []
	W0311 04:24:05.536233    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:05.536289    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:05.548202    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:05.548220    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:05.548225    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:05.563783    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:05.563796    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:05.577040    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:05.577052    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:05.591680    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:05.591693    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:05.604959    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:05.604972    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:05.620012    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:05.620026    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:05.632842    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:05.632856    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:05.646592    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:05.646609    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:05.681762    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:05.681782    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:05.697244    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:05.697259    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:05.716112    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:05.716127    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:05.731663    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:05.731674    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:05.758416    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:05.758433    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:05.764020    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:05.764037    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:05.804764    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:05.804777    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:08.323469    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:13.325595    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:13.325686    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:13.337011    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:13.337092    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:13.347828    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:13.347892    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:13.358263    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:13.358332    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:13.368610    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:13.368680    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:13.379783    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:13.379850    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:13.390539    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:13.390604    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:13.406522    4187 logs.go:276] 0 containers: []
	W0311 04:24:13.406533    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:13.406594    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:13.416937    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:13.416956    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:13.416964    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:13.460442    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:13.460456    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:13.472903    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:13.472915    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:13.485004    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:13.485017    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:13.505104    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:13.505116    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:13.517040    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:13.517052    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:13.552155    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:13.552165    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:13.556615    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:13.556623    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:13.570091    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:13.570103    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:13.582525    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:13.582538    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:13.599859    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:13.599870    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:13.613864    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:13.613873    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:13.625573    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:13.625584    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:13.650339    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:13.650350    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:13.662643    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:13.662655    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:16.176941    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:21.179183    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:21.179412    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:21.195751    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:21.195832    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:21.207113    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:21.207188    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:21.217409    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:21.217479    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:21.228553    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:21.228618    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:21.242709    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:21.242774    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:21.253485    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:21.253548    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:21.263711    4187 logs.go:276] 0 containers: []
	W0311 04:24:21.263721    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:21.263773    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:21.274452    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:21.274470    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:21.274476    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:21.278786    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:21.278794    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:21.292680    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:21.292690    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:21.304956    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:21.304971    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:21.316134    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:21.316145    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:21.331718    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:21.331729    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:21.343590    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:21.343600    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:21.363618    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:21.363630    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:21.378083    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:21.378093    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:21.389866    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:21.389877    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:21.413469    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:21.413478    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:21.424558    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:21.424568    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:21.458527    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:21.458534    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:21.493434    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:21.493447    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:21.506355    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:21.506373    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:24.020561    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:29.023046    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:29.023232    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:29.039621    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:29.039706    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:29.052453    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:29.052532    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:29.070993    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:29.071070    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:29.104872    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:29.104944    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:29.114858    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:29.114919    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:29.125665    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:29.125729    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:29.136037    4187 logs.go:276] 0 containers: []
	W0311 04:24:29.136049    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:29.136104    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:29.146139    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:29.146154    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:29.146159    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:29.180734    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:29.180747    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:29.195199    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:29.195219    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:29.210662    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:29.210673    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:29.222288    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:29.222300    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:29.226515    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:29.226524    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:29.247180    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:29.247190    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:29.258581    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:29.258592    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:29.270723    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:29.270733    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:29.286982    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:29.286990    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:29.298572    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:29.298583    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:29.333575    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:29.333587    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:29.346486    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:29.346498    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:29.361930    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:29.361942    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:29.388796    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:29.388808    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:31.917698    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:36.920186    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:36.920365    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:36.935349    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:36.935435    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:36.947215    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:36.947281    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:36.958273    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:36.958339    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:36.968773    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:36.968843    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:36.979660    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:36.979725    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:36.990482    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:36.990546    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:37.001272    4187 logs.go:276] 0 containers: []
	W0311 04:24:37.001287    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:37.001340    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:37.012379    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:37.012398    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:37.012404    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:37.048056    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:37.048070    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:37.059962    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:37.059974    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:37.083607    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:37.083616    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:37.116702    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:37.116710    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:37.121297    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:37.121303    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:37.136112    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:37.136126    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:37.149685    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:37.149697    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:37.161310    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:37.161321    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:37.172751    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:37.172761    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:37.187691    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:37.187703    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:37.200810    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:37.200821    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:37.228227    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:37.228241    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:37.244160    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:37.244171    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:37.268258    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:37.268269    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:39.783190    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:44.783646    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:44.783881    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:44.808904    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:44.809003    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:44.823826    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:44.823896    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:44.836128    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:44.836204    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:44.847007    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:44.847077    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:44.857259    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:44.857328    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:44.868151    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:44.868226    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:44.878743    4187 logs.go:276] 0 containers: []
	W0311 04:24:44.878754    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:44.878808    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:44.889099    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:44.889112    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:44.889117    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:44.911021    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:44.911031    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:44.923016    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:44.923027    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:44.940679    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:44.940687    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:44.945291    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:44.945300    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:44.959609    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:44.959624    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:44.984584    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:44.984593    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:44.995957    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:44.995968    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:45.011336    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:45.011346    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:45.045249    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:45.045260    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:45.060256    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:45.060264    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:45.073559    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:45.073574    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:45.087168    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:45.087177    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:45.123955    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:45.123969    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:45.136895    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:45.136908    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:47.652416    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:24:52.654742    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:24:52.655215    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:24:52.692471    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:24:52.692603    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:24:52.712551    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:24:52.712648    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:24:52.727870    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:24:52.727951    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:24:52.741295    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:24:52.741366    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:24:52.751887    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:24:52.751959    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:24:52.762870    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:24:52.762940    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:24:52.773122    4187 logs.go:276] 0 containers: []
	W0311 04:24:52.773133    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:24:52.773191    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:24:52.785557    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:24:52.785575    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:24:52.785582    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:24:52.820987    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:24:52.820999    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:24:52.833204    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:24:52.833217    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:24:52.845573    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:24:52.845585    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:24:52.858601    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:24:52.858613    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:24:52.874068    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:24:52.874081    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:24:52.890763    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:24:52.890778    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:24:52.902607    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:24:52.902616    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:24:52.919721    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:24:52.919731    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:24:52.932669    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:24:52.932681    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:24:52.960484    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:24:52.960494    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:24:52.986186    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:24:52.986200    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:24:53.021769    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:24:53.021783    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:24:53.026474    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:24:53.026486    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:24:53.041504    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:24:53.041512    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:24:55.558877    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:00.560146    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:00.560306    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:25:00.571060    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:25:00.571135    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:25:00.581612    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:25:00.581670    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:25:00.592159    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:25:00.592223    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:25:00.602952    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:25:00.603009    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:25:00.613530    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:25:00.613590    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:25:00.631003    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:25:00.631060    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:25:00.641495    4187 logs.go:276] 0 containers: []
	W0311 04:25:00.641509    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:25:00.641567    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:25:00.652171    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:25:00.652189    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:25:00.652194    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:25:00.663713    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:25:00.663723    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:25:00.687347    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:25:00.687355    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:25:00.691657    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:25:00.691667    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:25:00.703538    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:25:00.703549    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:25:00.715226    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:25:00.715237    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:25:00.726041    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:25:00.726051    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:25:00.739889    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:25:00.739903    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:25:00.751765    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:25:00.751775    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:25:00.787620    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:25:00.787629    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:25:00.806003    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:25:00.806012    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:25:00.822495    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:25:00.822513    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:25:00.840574    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:25:00.840587    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:25:00.863603    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:25:00.863635    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:25:00.902150    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:25:00.902168    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:25:03.420426    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:08.422468    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:08.422849    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:25:08.472013    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:25:08.472150    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:25:08.496237    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:25:08.496330    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:25:08.515899    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:25:08.515988    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:25:08.540641    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:25:08.540711    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:25:08.552565    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:25:08.552631    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:25:08.564415    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:25:08.564482    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:25:08.574651    4187 logs.go:276] 0 containers: []
	W0311 04:25:08.574661    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:25:08.574709    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:25:08.585032    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:25:08.585050    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:25:08.585055    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:25:08.621845    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:25:08.621858    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:25:08.634166    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:25:08.634181    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:25:08.651447    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:25:08.651457    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:25:08.663695    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:25:08.663707    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:25:08.677430    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:25:08.677441    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:25:08.691648    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:25:08.691659    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:25:08.710597    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:25:08.710607    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:25:08.723077    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:25:08.723092    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:25:08.758513    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:25:08.758525    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:25:08.762756    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:25:08.762762    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:25:08.776942    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:25:08.776952    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:25:08.788773    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:25:08.788783    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:25:08.800938    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:25:08.800948    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:25:08.825733    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:25:08.825744    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:25:11.339318    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:16.341545    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:16.341755    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:25:16.357349    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:25:16.357437    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:25:16.370574    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:25:16.370640    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:25:16.387802    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:25:16.387877    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:25:16.398395    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:25:16.398459    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:25:16.408853    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:25:16.408922    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:25:16.419485    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:25:16.419548    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:25:16.430885    4187 logs.go:276] 0 containers: []
	W0311 04:25:16.430896    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:25:16.430957    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:25:16.441727    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:25:16.441744    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:25:16.441749    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:25:16.456375    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:25:16.456385    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:25:16.470503    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:25:16.470513    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:25:16.482270    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:25:16.482281    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:25:16.517933    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:25:16.517949    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:25:16.529277    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:25:16.529289    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:25:16.541589    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:25:16.541602    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:25:16.560464    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:25:16.560478    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:25:16.587231    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:25:16.587245    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:25:16.598686    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:25:16.598697    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:25:16.618214    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:25:16.618229    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:25:16.640532    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:25:16.640553    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:25:16.666866    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:25:16.666903    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:25:16.675859    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:25:16.675883    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:25:16.727181    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:25:16.727191    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:25:19.243558    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:24.245897    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:24.246237    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 04:25:24.281840    4187 logs.go:276] 1 containers: [f1074f516e72]
	I0311 04:25:24.281944    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 04:25:24.298164    4187 logs.go:276] 1 containers: [cf5dcb5c359b]
	I0311 04:25:24.298235    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 04:25:24.311851    4187 logs.go:276] 4 containers: [aab4a91de15b 6bb4762b423c 2bf12152725a 4d26bbfa384d]
	I0311 04:25:24.311929    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 04:25:24.323151    4187 logs.go:276] 1 containers: [a0ae45f47020]
	I0311 04:25:24.323217    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 04:25:24.333802    4187 logs.go:276] 1 containers: [6a1cedac2200]
	I0311 04:25:24.333868    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 04:25:24.345066    4187 logs.go:276] 1 containers: [fb246c9f163b]
	I0311 04:25:24.345123    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 04:25:24.355163    4187 logs.go:276] 0 containers: []
	W0311 04:25:24.355175    4187 logs.go:278] No container was found matching "kindnet"
	I0311 04:25:24.355234    4187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 04:25:24.365921    4187 logs.go:276] 1 containers: [9d6ab045d7f3]
	I0311 04:25:24.365938    4187 logs.go:123] Gathering logs for coredns [aab4a91de15b] ...
	I0311 04:25:24.365943    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aab4a91de15b"
	I0311 04:25:24.377490    4187 logs.go:123] Gathering logs for coredns [2bf12152725a] ...
	I0311 04:25:24.377505    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bf12152725a"
	I0311 04:25:24.389429    4187 logs.go:123] Gathering logs for kube-scheduler [a0ae45f47020] ...
	I0311 04:25:24.389442    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ae45f47020"
	I0311 04:25:24.405424    4187 logs.go:123] Gathering logs for kube-controller-manager [fb246c9f163b] ...
	I0311 04:25:24.405435    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb246c9f163b"
	I0311 04:25:24.422811    4187 logs.go:123] Gathering logs for Docker ...
	I0311 04:25:24.422821    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 04:25:24.445540    4187 logs.go:123] Gathering logs for kubelet ...
	I0311 04:25:24.445548    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 04:25:24.478809    4187 logs.go:123] Gathering logs for dmesg ...
	I0311 04:25:24.478819    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 04:25:24.483328    4187 logs.go:123] Gathering logs for describe nodes ...
	I0311 04:25:24.483337    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 04:25:24.519073    4187 logs.go:123] Gathering logs for etcd [cf5dcb5c359b] ...
	I0311 04:25:24.519084    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf5dcb5c359b"
	I0311 04:25:24.534374    4187 logs.go:123] Gathering logs for coredns [6bb4762b423c] ...
	I0311 04:25:24.534387    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bb4762b423c"
	I0311 04:25:24.547625    4187 logs.go:123] Gathering logs for coredns [4d26bbfa384d] ...
	I0311 04:25:24.547638    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26bbfa384d"
	I0311 04:25:24.560300    4187 logs.go:123] Gathering logs for kube-proxy [6a1cedac2200] ...
	I0311 04:25:24.560311    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1cedac2200"
	I0311 04:25:24.572180    4187 logs.go:123] Gathering logs for kube-apiserver [f1074f516e72] ...
	I0311 04:25:24.572195    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1074f516e72"
	I0311 04:25:24.590337    4187 logs.go:123] Gathering logs for storage-provisioner [9d6ab045d7f3] ...
	I0311 04:25:24.590346    4187 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d6ab045d7f3"
	I0311 04:25:24.602348    4187 logs.go:123] Gathering logs for container status ...
	I0311 04:25:24.602359    4187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 04:25:27.116534    4187 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 04:25:32.118892    4187 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 04:25:32.123660    4187 out.go:177] 
	W0311 04:25:32.127653    4187 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0311 04:25:32.127665    4187 out.go:239] * 
	W0311 04:25:32.128895    4187 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:25:32.138626    4187 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-629000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (637.79s)

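The 637 seconds spent here are one long timeout: each "Checking apiserver healthz at https://10.0.2.15:8443/healthz" line above is followed five seconds later by "context deadline exceeded", with a round of log gathering in between, until the 6m0s node wait runs out and GUEST_START is reported. A minimal Go sketch of that polling pattern, for illustration only (not minikube's actual implementation; the URL and time budgets are taken from the log above, and TLS verification is skipped here where minikube would trust the cluster CA):

	// healthz_poll.go - sketch of the health-polling loop visible in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "https://10.0.2.15:8443/healthz" // endpoint from the log
		client := &http.Client{
			Timeout: 5 * time.Second, // roughly the gap between checks above
			Transport: &http.Transport{
				// Simplification: minikube verifies against the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthz reported healthy")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	}

The loop's only success path is an HTTP 200 from /healthz; everything else, including the request timeouts seen here, just burns the budget.
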
TestPause/serial/Start (9.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-366000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0311 04:25:35.782661    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-366000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.806821667s)

                                                
                                                
-- stdout --
	* [pause-366000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-366000" primary control-plane node in "pause-366000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-366000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-366000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-366000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-366000 -n pause-366000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-366000 -n pause-366000: exit status 7 (52.761375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-366000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.86s)

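This failure, and every qemu2 start below it, shares one root cause visible in the stderr block: nothing is listening on /var/run/socket_vmnet, so QEMU's socket_vmnet_client is refused before the VM gets a network and minikube aborts with GUEST_PROVISION. A hedged Go sketch of the precondition probe (the socket path comes from the log; the socket_vmnet daemon itself is typically run separately as a root-owned service on the host):

	// vmnet_check.go - probe the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path from the log above; dialing fails with "connection refused"
		// when the socket_vmnet daemon is not running on the host.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If that dial fails, retrying from minikube's side cannot help, which is why each of these tests burns its ~10s on two create attempts and gives up.
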
TestNoKubernetes/serial/StartWithK8s (9.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-886000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-886000 --driver=qemu2 : exit status 80 (9.766529208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-886000" primary control-plane node in "NoKubernetes-886000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-886000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-886000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-886000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000: exit status 7 (70.025041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.84s)

TestNoKubernetes/serial/StartWithStopK8s (5.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --driver=qemu2 : exit status 80 (5.840451083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-886000
	* Restarting existing qemu2 VM for "NoKubernetes-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-886000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000: exit status 7 (63.724125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.90s)

TestNoKubernetes/serial/Start (6.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --driver=qemu2 : exit status 80 (6.698334583s)

                                                
                                                
-- stdout --
	* [NoKubernetes-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-886000
	* Restarting existing qemu2 VM for "NoKubernetes-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-886000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000: exit status 7 (68.791916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (6.77s)

TestNoKubernetes/serial/StartNoArgs (5.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-886000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-886000 --driver=qemu2 : exit status 80 (5.903893625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-886000
	* Restarting existing qemu2 VM for "NoKubernetes-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-886000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-886000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-886000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-886000 -n NoKubernetes-886000: exit status 7 (39.427167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-886000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.94s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.69s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.69s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.63s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18350
- KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3810563789/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.63s)

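Both HyperkitDriverSkipUpgrade cases exit with status 56 for the same structural reason, spelled out in the output above: hyperkit is an Intel-only hypervisor, so a darwin/arm64 agent can never run it, upgrade path or not. A sketch of the kind of GOOS/GOARCH gate that produces DRV_UNSUPPORTED_OS (illustrative only, not minikube's actual driver registry code):

	// drv_gate.go - illustrative platform gate for an Intel-only driver.
	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			// Matches this report's DRV_UNSUPPORTED_OS exit: hyperkit needs x86.
			fmt.Println("driver 'hyperkit' is not supported on darwin/arm64")
			return
		}
		fmt.Println("hyperkit is a candidate driver on this platform")
	}
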
TestNetworkPlugins/group/auto/Start (9.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.862261209s)

                                                
                                                
-- stdout --
	* [auto-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-896000" primary control-plane node in "auto-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:26:50.725645    4669 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:26:50.725787    4669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:50.725791    4669 out.go:304] Setting ErrFile to fd 2...
	I0311 04:26:50.725793    4669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:26:50.725921    4669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:26:50.727026    4669 out.go:298] Setting JSON to false
	I0311 04:26:50.744770    4669 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3382,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:26:50.744827    4669 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:26:50.750877    4669 out.go:177] * [auto-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:26:50.757741    4669 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:26:50.761788    4669 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:26:50.757843    4669 notify.go:220] Checking for updates...
	I0311 04:26:50.766698    4669 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:26:50.773706    4669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:26:50.776806    4669 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:26:50.783804    4669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:26:50.788054    4669 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:26:50.788125    4669 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:26:50.788169    4669 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:26:50.792765    4669 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:26:50.804739    4669 start.go:297] selected driver: qemu2
	I0311 04:26:50.804748    4669 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:26:50.804755    4669 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:26:50.807182    4669 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:26:50.810738    4669 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:26:50.813865    4669 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:26:50.813919    4669 cni.go:84] Creating CNI manager for ""
	I0311 04:26:50.813929    4669 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:26:50.813940    4669 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:26:50.813965    4669 start.go:340] cluster config:
	{Name:auto-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:26:50.819482    4669 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:26:50.826756    4669 out.go:177] * Starting "auto-896000" primary control-plane node in "auto-896000" cluster
	I0311 04:26:50.830640    4669 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:26:50.830656    4669 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:26:50.830665    4669 cache.go:56] Caching tarball of preloaded images
	I0311 04:26:50.830749    4669 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:26:50.830757    4669 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:26:50.830832    4669 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/auto-896000/config.json ...
	I0311 04:26:50.830846    4669 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/auto-896000/config.json: {Name:mk46e779c525cb43125b54b5afd43f0d66c8a052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:26:50.831077    4669 start.go:360] acquireMachinesLock for auto-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:50.831112    4669 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "auto-896000"
	I0311 04:26:50.831127    4669 start.go:93] Provisioning new machine with config: &{Name:auto-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:50.831166    4669 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:50.837718    4669 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:26:50.857625    4669 start.go:159] libmachine.API.Create for "auto-896000" (driver="qemu2")
	I0311 04:26:50.857659    4669 client.go:168] LocalClient.Create starting
	I0311 04:26:50.857729    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:50.857764    4669 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:50.857776    4669 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:50.857827    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:50.857853    4669 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:50.857859    4669 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:50.858285    4669 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:50.999642    4669 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:51.042946    4669 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:51.042951    4669 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:51.043112    4669 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2
	I0311 04:26:51.055151    4669 main.go:141] libmachine: STDOUT: 
	I0311 04:26:51.055173    4669 main.go:141] libmachine: STDERR: 
	I0311 04:26:51.055220    4669 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2 +20000M
	I0311 04:26:51.065701    4669 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:51.065715    4669 main.go:141] libmachine: STDERR: 
	I0311 04:26:51.065732    4669 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2
	I0311 04:26:51.065737    4669 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:51.065769    4669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:e9:b6:75:21:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2
	I0311 04:26:51.067446    4669 main.go:141] libmachine: STDOUT: 
	I0311 04:26:51.067467    4669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:51.067487    4669 client.go:171] duration metric: took 209.827ms to LocalClient.Create
	I0311 04:26:53.068877    4669 start.go:128] duration metric: took 2.237721875s to createHost
	I0311 04:26:53.069025    4669 start.go:83] releasing machines lock for "auto-896000", held for 2.237903667s
	W0311 04:26:53.069071    4669 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:53.081100    4669 out.go:177] * Deleting "auto-896000" in qemu2 ...
	W0311 04:26:53.107249    4669 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:26:53.107286    4669 start.go:728] Will try again in 5 seconds ...
	I0311 04:26:58.109380    4669 start.go:360] acquireMachinesLock for auto-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:26:58.109847    4669 start.go:364] duration metric: took 359.458µs to acquireMachinesLock for "auto-896000"
	I0311 04:26:58.110016    4669 start.go:93] Provisioning new machine with config: &{Name:auto-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:26:58.110328    4669 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:26:58.121073    4669 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:26:58.170199    4669 start.go:159] libmachine.API.Create for "auto-896000" (driver="qemu2")
	I0311 04:26:58.170245    4669 client.go:168] LocalClient.Create starting
	I0311 04:26:58.170354    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:26:58.170416    4669 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:58.170432    4669 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:58.170500    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:26:58.170549    4669 main.go:141] libmachine: Decoding PEM data...
	I0311 04:26:58.170562    4669 main.go:141] libmachine: Parsing certificate...
	I0311 04:26:58.171098    4669 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:26:58.322862    4669 main.go:141] libmachine: Creating SSH key...
	I0311 04:26:58.483673    4669 main.go:141] libmachine: Creating Disk image...
	I0311 04:26:58.483680    4669 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:26:58.483871    4669 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2
	I0311 04:26:58.496658    4669 main.go:141] libmachine: STDOUT: 
	I0311 04:26:58.496683    4669 main.go:141] libmachine: STDERR: 
	I0311 04:26:58.496746    4669 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2 +20000M
	I0311 04:26:58.508161    4669 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:26:58.508178    4669 main.go:141] libmachine: STDERR: 
	I0311 04:26:58.508192    4669 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2
	I0311 04:26:58.508197    4669 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:26:58.508232    4669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:eb:40:1c:aa:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/auto-896000/disk.qcow2
	I0311 04:26:58.510148    4669 main.go:141] libmachine: STDOUT: 
	I0311 04:26:58.510164    4669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:26:58.510176    4669 client.go:171] duration metric: took 339.933834ms to LocalClient.Create
	I0311 04:27:00.512322    4669 start.go:128] duration metric: took 2.402010291s to createHost
	I0311 04:27:00.512399    4669 start.go:83] releasing machines lock for "auto-896000", held for 2.402532458s
	W0311 04:27:00.512745    4669 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:00.525390    4669 out.go:177] 
	W0311 04:27:00.529521    4669 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:27:00.529544    4669 out.go:239] * 
	* 
	W0311 04:27:00.532230    4669 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:27:00.541407    4669 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.86s)
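
Every failure above stops at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal pre-flight probe, sketched below in Go (hypothetical, not part of minikube or this test suite; the socket path is taken from the logs above), would distinguish this host-side daemon outage from a VM-side failure before any machine is created:

// Hypothetical pre-flight probe (not minikube code): verifies that the
// socket_vmnet daemon is accepting connections before a VM launch is
// attempted, which is the step every run in this report fails at.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

const socketPath = "/var/run/socket_vmnet" // path taken from the logs above

func main() {
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// Matches the failure mode in this report: "Connection refused"
		// means nothing is listening on the socket at all.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}

Run on the build agent itself, a "connection refused" from this probe reproduces the exact error string in the logs and points at the socket_vmnet daemon on the host rather than at minikube.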

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.015464458s)

                                                
                                                
-- stdout --
	* [kindnet-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-896000" primary control-plane node in "kindnet-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:27:02.823553    4784 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:27:02.823669    4784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:02.823673    4784 out.go:304] Setting ErrFile to fd 2...
	I0311 04:27:02.823676    4784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:02.823805    4784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:27:02.824889    4784 out.go:298] Setting JSON to false
	I0311 04:27:02.841225    4784 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3394,"bootTime":1710153028,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:27:02.841291    4784 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:27:02.847276    4784 out.go:177] * [kindnet-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:27:02.854224    4784 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:27:02.854280    4784 notify.go:220] Checking for updates...
	I0311 04:27:02.857299    4784 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:27:02.860221    4784 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:27:02.863209    4784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:27:02.866297    4784 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:27:02.869148    4784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:27:02.872559    4784 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:02.872626    4784 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:02.872679    4784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:27:02.877212    4784 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:27:02.884203    4784 start.go:297] selected driver: qemu2
	I0311 04:27:02.884209    4784 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:27:02.884215    4784 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:27:02.886559    4784 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:27:02.890224    4784 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:27:02.893278    4784 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:27:02.893337    4784 cni.go:84] Creating CNI manager for "kindnet"
	I0311 04:27:02.893342    4784 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 04:27:02.893374    4784 start.go:340] cluster config:
	{Name:kindnet-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:27:02.897788    4784 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:27:02.905251    4784 out.go:177] * Starting "kindnet-896000" primary control-plane node in "kindnet-896000" cluster
	I0311 04:27:02.909163    4784 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:27:02.909180    4784 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:27:02.909191    4784 cache.go:56] Caching tarball of preloaded images
	I0311 04:27:02.909260    4784 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:27:02.909267    4784 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:27:02.909339    4784 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/kindnet-896000/config.json ...
	I0311 04:27:02.909352    4784 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/kindnet-896000/config.json: {Name:mk3e9dae46a01f3eb6681cc9ee7c4b07a1d7af80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:27:02.909567    4784 start.go:360] acquireMachinesLock for kindnet-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:02.909599    4784 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "kindnet-896000"
	I0311 04:27:02.909611    4784 start.go:93] Provisioning new machine with config: &{Name:kindnet-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:02.909638    4784 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:02.918236    4784 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:02.936669    4784 start.go:159] libmachine.API.Create for "kindnet-896000" (driver="qemu2")
	I0311 04:27:02.936695    4784 client.go:168] LocalClient.Create starting
	I0311 04:27:02.936757    4784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:02.936794    4784 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:02.936806    4784 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:02.936858    4784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:02.936881    4784 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:02.936891    4784 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:02.937336    4784 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:03.077202    4784 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:03.298313    4784 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:03.298321    4784 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:03.298516    4784 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2
	I0311 04:27:03.311288    4784 main.go:141] libmachine: STDOUT: 
	I0311 04:27:03.311314    4784 main.go:141] libmachine: STDERR: 
	I0311 04:27:03.311375    4784 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2 +20000M
	I0311 04:27:03.322062    4784 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:03.322077    4784 main.go:141] libmachine: STDERR: 
	I0311 04:27:03.322096    4784 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2
	I0311 04:27:03.322101    4784 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:03.322133    4784 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:9e:b5:e1:63:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2
	I0311 04:27:03.323944    4784 main.go:141] libmachine: STDOUT: 
	I0311 04:27:03.323964    4784 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:03.323984    4784 client.go:171] duration metric: took 387.293125ms to LocalClient.Create
	I0311 04:27:05.324343    4784 start.go:128] duration metric: took 2.414697375s to createHost
	I0311 04:27:05.324433    4784 start.go:83] releasing machines lock for "kindnet-896000", held for 2.414876458s
	W0311 04:27:05.324485    4784 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:05.343651    4784 out.go:177] * Deleting "kindnet-896000" in qemu2 ...
	W0311 04:27:05.371350    4784 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:05.371388    4784 start.go:728] Will try again in 5 seconds ...
	I0311 04:27:10.373567    4784 start.go:360] acquireMachinesLock for kindnet-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:10.374116    4784 start.go:364] duration metric: took 400.333µs to acquireMachinesLock for "kindnet-896000"
	I0311 04:27:10.374266    4784 start.go:93] Provisioning new machine with config: &{Name:kindnet-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kindnet-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:10.374532    4784 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:10.384188    4784 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:10.433718    4784 start.go:159] libmachine.API.Create for "kindnet-896000" (driver="qemu2")
	I0311 04:27:10.433773    4784 client.go:168] LocalClient.Create starting
	I0311 04:27:10.433886    4784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:10.433955    4784 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:10.433973    4784 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:10.434034    4784 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:10.434078    4784 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:10.434090    4784 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:10.434641    4784 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:10.586333    4784 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:10.737494    4784 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:10.737500    4784 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:10.737680    4784 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2
	I0311 04:27:10.750363    4784 main.go:141] libmachine: STDOUT: 
	I0311 04:27:10.750384    4784 main.go:141] libmachine: STDERR: 
	I0311 04:27:10.750443    4784 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2 +20000M
	I0311 04:27:10.761162    4784 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:10.761181    4784 main.go:141] libmachine: STDERR: 
	I0311 04:27:10.761196    4784 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2
	I0311 04:27:10.761200    4784 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:10.761229    4784 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:f2:c6:fe:f7:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kindnet-896000/disk.qcow2
	I0311 04:27:10.762991    4784 main.go:141] libmachine: STDOUT: 
	I0311 04:27:10.763007    4784 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:10.763021    4784 client.go:171] duration metric: took 329.248708ms to LocalClient.Create
	I0311 04:27:12.765152    4784 start.go:128] duration metric: took 2.390645333s to createHost
	I0311 04:27:12.765198    4784 start.go:83] releasing machines lock for "kindnet-896000", held for 2.391110666s
	W0311 04:27:12.765518    4784 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:12.774865    4784 out.go:177] 
	W0311 04:27:12.781124    4784 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:27:12.781160    4784 out.go:239] * 
	* 
	W0311 04:27:12.783540    4784 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:27:12.793025    4784 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.02s)
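
The kindnet run shows the same recovery path as the auto run: the first createHost fails, minikube deletes the half-created machine, waits a fixed 5 seconds ("Will try again in 5 seconds ..."), makes exactly one more attempt, and then exits with GUEST_PROVISION (exit status 80). A minimal sketch of that control flow in Go follows (simplified and hypothetical; the real logic lives in start.go, and these function names are illustrative only):

// Hypothetical, simplified reconstruction of the start/retry behaviour
// visible in the logs: one failed provisioning attempt triggers a delete,
// a fixed 5s pause, and exactly one more attempt before giving up.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	// Stand-in for the real provisioning step; in this report it always
	// fails with: Failed to connect to "/var/run/socket_vmnet".
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	err := createHost()
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	// The logs show a hard-coded pause before the second (and last) attempt.
	time.Sleep(5 * time.Second)
	return createHost()
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}

Because the failure is environmental, the second attempt is guaranteed to fail too, which is why each network-plugin test in this group takes roughly 10 seconds: two createHost attempts of about 2.4s each plus the 5s pause.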

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.875763791s)

                                                
                                                
-- stdout --
	* [calico-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-896000" primary control-plane node in "calico-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:27:15.159547    4898 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:27:15.159684    4898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:15.159688    4898 out.go:304] Setting ErrFile to fd 2...
	I0311 04:27:15.159690    4898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:15.159822    4898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:27:15.160898    4898 out.go:298] Setting JSON to false
	I0311 04:27:15.177061    4898 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3407,"bootTime":1710153028,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:27:15.177126    4898 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:27:15.183411    4898 out.go:177] * [calico-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:27:15.189339    4898 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:27:15.189419    4898 notify.go:220] Checking for updates...
	I0311 04:27:15.197383    4898 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:27:15.200349    4898 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:27:15.203352    4898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:27:15.206358    4898 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:27:15.207885    4898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:27:15.211688    4898 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:15.211751    4898 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:15.211805    4898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:27:15.216350    4898 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:27:15.222333    4898 start.go:297] selected driver: qemu2
	I0311 04:27:15.222340    4898 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:27:15.222345    4898 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:27:15.224594    4898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:27:15.227347    4898 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:27:15.230450    4898 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:27:15.230490    4898 cni.go:84] Creating CNI manager for "calico"
	I0311 04:27:15.230495    4898 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0311 04:27:15.230526    4898 start.go:340] cluster config:
	{Name:calico-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:27:15.234944    4898 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:27:15.242379    4898 out.go:177] * Starting "calico-896000" primary control-plane node in "calico-896000" cluster
	I0311 04:27:15.246348    4898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:27:15.246362    4898 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:27:15.246373    4898 cache.go:56] Caching tarball of preloaded images
	I0311 04:27:15.246435    4898 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:27:15.246450    4898 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:27:15.246529    4898 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/calico-896000/config.json ...
	I0311 04:27:15.246542    4898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/calico-896000/config.json: {Name:mk905f06a9d39599f3808ec43d0ff7812b9caba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:27:15.246781    4898 start.go:360] acquireMachinesLock for calico-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:15.246814    4898 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "calico-896000"
	I0311 04:27:15.246825    4898 start.go:93] Provisioning new machine with config: &{Name:calico-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:15.246861    4898 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:15.255256    4898 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:15.273568    4898 start.go:159] libmachine.API.Create for "calico-896000" (driver="qemu2")
	I0311 04:27:15.273608    4898 client.go:168] LocalClient.Create starting
	I0311 04:27:15.273662    4898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:15.273693    4898 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:15.273702    4898 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:15.273750    4898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:15.273772    4898 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:15.273779    4898 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:15.274156    4898 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:15.418015    4898 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:15.530927    4898 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:15.530933    4898 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:15.531111    4898 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2
	I0311 04:27:15.543483    4898 main.go:141] libmachine: STDOUT: 
	I0311 04:27:15.543504    4898 main.go:141] libmachine: STDERR: 
	I0311 04:27:15.543556    4898 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2 +20000M
	I0311 04:27:15.554147    4898 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:15.554165    4898 main.go:141] libmachine: STDERR: 
	I0311 04:27:15.554180    4898 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2
	I0311 04:27:15.554184    4898 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:15.554214    4898 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e4:a9:53:d8:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2
	I0311 04:27:15.555967    4898 main.go:141] libmachine: STDOUT: 
	I0311 04:27:15.555981    4898 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:15.555999    4898 client.go:171] duration metric: took 282.391792ms to LocalClient.Create
	I0311 04:27:17.558191    4898 start.go:128] duration metric: took 2.311354792s to createHost
	I0311 04:27:17.558250    4898 start.go:83] releasing machines lock for "calico-896000", held for 2.311476208s
	W0311 04:27:17.558301    4898 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:17.574393    4898 out.go:177] * Deleting "calico-896000" in qemu2 ...
	W0311 04:27:17.601837    4898 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:17.601919    4898 start.go:728] Will try again in 5 seconds ...
	I0311 04:27:22.602374    4898 start.go:360] acquireMachinesLock for calico-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:22.602770    4898 start.go:364] duration metric: took 303.875µs to acquireMachinesLock for "calico-896000"
	I0311 04:27:22.602891    4898 start.go:93] Provisioning new machine with config: &{Name:calico-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:22.603193    4898 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:22.619089    4898 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:22.671242    4898 start.go:159] libmachine.API.Create for "calico-896000" (driver="qemu2")
	I0311 04:27:22.671299    4898 client.go:168] LocalClient.Create starting
	I0311 04:27:22.671414    4898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:22.671481    4898 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:22.671505    4898 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:22.671599    4898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:22.671647    4898 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:22.671666    4898 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:22.672169    4898 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:22.825777    4898 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:22.934025    4898 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:22.934030    4898 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:22.934204    4898 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2
	I0311 04:27:22.946931    4898 main.go:141] libmachine: STDOUT: 
	I0311 04:27:22.946949    4898 main.go:141] libmachine: STDERR: 
	I0311 04:27:22.947000    4898 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2 +20000M
	I0311 04:27:22.957828    4898 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:22.957846    4898 main.go:141] libmachine: STDERR: 
	I0311 04:27:22.957859    4898 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2
	I0311 04:27:22.957865    4898 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:22.957898    4898 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a3:9c:13:02:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/calico-896000/disk.qcow2
	I0311 04:27:22.959603    4898 main.go:141] libmachine: STDOUT: 
	I0311 04:27:22.959620    4898 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:22.959633    4898 client.go:171] duration metric: took 288.334167ms to LocalClient.Create
	I0311 04:27:24.961767    4898 start.go:128] duration metric: took 2.358596833s to createHost
	I0311 04:27:24.961820    4898 start.go:83] releasing machines lock for "calico-896000", held for 2.3590765s
	W0311 04:27:24.962187    4898 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:24.976030    4898 out.go:177] 
	W0311 04:27:24.979933    4898 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:27:24.979967    4898 out.go:239] * 
	* 
	W0311 04:27:24.982654    4898 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:27:24.989802    4898 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)
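
Note: every failure in this group is the same environmental fault rather than a CNI problem: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A quick way to check the daemon on the build host (a diagnostic sketch; the start command and gateway address are the socket_vmnet README defaults and may differ on this agent):

    # Is the daemon alive, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If not, start it as documented in the socket_vmnet README (assumed defaults):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet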

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.91830275s)

-- stdout --
	* [custom-flannel-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-896000" primary control-plane node in "custom-flannel-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:27:27.538877    5019 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:27:27.539009    5019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:27.539013    5019 out.go:304] Setting ErrFile to fd 2...
	I0311 04:27:27.539015    5019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:27.539152    5019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:27:27.540266    5019 out.go:298] Setting JSON to false
	I0311 04:27:27.556479    5019 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3419,"bootTime":1710153028,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:27:27.556541    5019 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:27:27.563432    5019 out.go:177] * [custom-flannel-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:27:27.570311    5019 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:27:27.574343    5019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:27:27.570352    5019 notify.go:220] Checking for updates...
	I0311 04:27:27.575768    5019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:27:27.579252    5019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:27:27.582297    5019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:27:27.585342    5019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:27:27.588591    5019 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:27.588656    5019 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:27.588710    5019 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:27:27.593288    5019 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:27:27.600301    5019 start.go:297] selected driver: qemu2
	I0311 04:27:27.600308    5019 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:27:27.600321    5019 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:27:27.602594    5019 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:27:27.606260    5019 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:27:27.609391    5019 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:27:27.609441    5019 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0311 04:27:27.609451    5019 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0311 04:27:27.609491    5019 start.go:340] cluster config:
	{Name:custom-flannel-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:27:27.613900    5019 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:27:27.617301    5019 out.go:177] * Starting "custom-flannel-896000" primary control-plane node in "custom-flannel-896000" cluster
	I0311 04:27:27.624323    5019 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:27:27.624338    5019 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:27:27.624350    5019 cache.go:56] Caching tarball of preloaded images
	I0311 04:27:27.624421    5019 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:27:27.624428    5019 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:27:27.624497    5019 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/custom-flannel-896000/config.json ...
	I0311 04:27:27.624509    5019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/custom-flannel-896000/config.json: {Name:mk550d3ea11d2fb6ee82845d442879e9d15edeac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:27:27.624718    5019 start.go:360] acquireMachinesLock for custom-flannel-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:27.624752    5019 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "custom-flannel-896000"
	I0311 04:27:27.624763    5019 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:27.624793    5019 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:27.632098    5019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:27.649643    5019 start.go:159] libmachine.API.Create for "custom-flannel-896000" (driver="qemu2")
	I0311 04:27:27.649674    5019 client.go:168] LocalClient.Create starting
	I0311 04:27:27.649735    5019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:27.649767    5019 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:27.649777    5019 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:27.649823    5019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:27.649845    5019 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:27.649852    5019 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:27.650280    5019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:27.792897    5019 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:27.907262    5019 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:27.907268    5019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:27.907449    5019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2
	I0311 04:27:27.919899    5019 main.go:141] libmachine: STDOUT: 
	I0311 04:27:27.919917    5019 main.go:141] libmachine: STDERR: 
	I0311 04:27:27.919988    5019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2 +20000M
	I0311 04:27:27.930557    5019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:27.930574    5019 main.go:141] libmachine: STDERR: 
	I0311 04:27:27.930587    5019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2
	I0311 04:27:27.930592    5019 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:27.930627    5019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:21:f3:47:e5:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2
	I0311 04:27:27.932319    5019 main.go:141] libmachine: STDOUT: 
	I0311 04:27:27.932334    5019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:27.932360    5019 client.go:171] duration metric: took 282.685833ms to LocalClient.Create
	I0311 04:27:29.934526    5019 start.go:128] duration metric: took 2.309757458s to createHost
	I0311 04:27:29.934622    5019 start.go:83] releasing machines lock for "custom-flannel-896000", held for 2.309910833s
	W0311 04:27:29.934674    5019 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:29.951698    5019 out.go:177] * Deleting "custom-flannel-896000" in qemu2 ...
	W0311 04:27:29.976622    5019 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:29.976656    5019 start.go:728] Will try again in 5 seconds ...
	I0311 04:27:34.977131    5019 start.go:360] acquireMachinesLock for custom-flannel-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:34.977674    5019 start.go:364] duration metric: took 438.875µs to acquireMachinesLock for "custom-flannel-896000"
	I0311 04:27:34.977824    5019 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:34.978075    5019 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:34.988734    5019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:35.040036    5019 start.go:159] libmachine.API.Create for "custom-flannel-896000" (driver="qemu2")
	I0311 04:27:35.040083    5019 client.go:168] LocalClient.Create starting
	I0311 04:27:35.040193    5019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:35.040257    5019 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:35.040272    5019 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:35.040344    5019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:35.040387    5019 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:35.040399    5019 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:35.040884    5019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:35.190249    5019 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:35.351898    5019 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:35.351911    5019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:35.352097    5019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2
	I0311 04:27:35.364592    5019 main.go:141] libmachine: STDOUT: 
	I0311 04:27:35.364607    5019 main.go:141] libmachine: STDERR: 
	I0311 04:27:35.364663    5019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2 +20000M
	I0311 04:27:35.375267    5019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:35.375282    5019 main.go:141] libmachine: STDERR: 
	I0311 04:27:35.375293    5019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2
	I0311 04:27:35.375298    5019 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:35.375340    5019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:99:60:b9:5f:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/custom-flannel-896000/disk.qcow2
	I0311 04:27:35.377058    5019 main.go:141] libmachine: STDOUT: 
	I0311 04:27:35.377073    5019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:35.377086    5019 client.go:171] duration metric: took 337.005084ms to LocalClient.Create
	I0311 04:27:37.379226    5019 start.go:128] duration metric: took 2.401146833s to createHost
	I0311 04:27:37.379327    5019 start.go:83] releasing machines lock for "custom-flannel-896000", held for 2.401658459s
	W0311 04:27:37.379746    5019 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:37.394251    5019 out.go:177] 
	W0311 04:27:37.398556    5019 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:27:37.398612    5019 out.go:239] * 
	* 
	W0311 04:27:37.401209    5019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:27:37.413336    5019 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
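
Note: the --cni=testdata/kube-flannel.yaml option is never exercised here; host creation fails before any CNI is applied. Since socket_vmnet_client simply connects to the socket and then execs the given command, the connection can be probed directly (a sketch; `true` is just an arbitrary no-op child process):

    # Succeeds quietly if the socket accepts connections; otherwise prints
    # the same 'Failed to connect to "/var/run/socket_vmnet"' error seen above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true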

TestNetworkPlugins/group/false/Start (9.88s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.874901417s)

-- stdout --
	* [false-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-896000" primary control-plane node in "false-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:27:39.903841    5144 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:27:39.904045    5144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:39.904048    5144 out.go:304] Setting ErrFile to fd 2...
	I0311 04:27:39.904051    5144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:39.904171    5144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:27:39.905205    5144 out.go:298] Setting JSON to false
	I0311 04:27:39.921346    5144 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3431,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:27:39.921414    5144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:27:39.927276    5144 out.go:177] * [false-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:27:39.934137    5144 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:27:39.934174    5144 notify.go:220] Checking for updates...
	I0311 04:27:39.941094    5144 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:27:39.948128    5144 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:27:39.955164    5144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:27:39.959224    5144 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:27:39.962162    5144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:27:39.965492    5144 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:39.965562    5144 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:39.965613    5144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:27:39.970216    5144 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:27:39.977115    5144 start.go:297] selected driver: qemu2
	I0311 04:27:39.977123    5144 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:27:39.977129    5144 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:27:39.979600    5144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:27:39.983165    5144 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:27:39.986181    5144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:27:39.986239    5144 cni.go:84] Creating CNI manager for "false"
	I0311 04:27:39.986269    5144 start.go:340] cluster config:
	{Name:false-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:27:39.991339    5144 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:27:39.994181    5144 out.go:177] * Starting "false-896000" primary control-plane node in "false-896000" cluster
	I0311 04:27:40.002235    5144 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:27:40.002251    5144 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:27:40.002262    5144 cache.go:56] Caching tarball of preloaded images
	I0311 04:27:40.002360    5144 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:27:40.002366    5144 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:27:40.002432    5144 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/false-896000/config.json ...
	I0311 04:27:40.002445    5144 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/false-896000/config.json: {Name:mk1c727f35914d8d513fb68c7449aa3ebbade21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:27:40.002692    5144 start.go:360] acquireMachinesLock for false-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:40.002729    5144 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "false-896000"
	I0311 04:27:40.002744    5144 start.go:93] Provisioning new machine with config: &{Name:false-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:40.002781    5144 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:40.006068    5144 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:40.024602    5144 start.go:159] libmachine.API.Create for "false-896000" (driver="qemu2")
	I0311 04:27:40.024640    5144 client.go:168] LocalClient.Create starting
	I0311 04:27:40.024715    5144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:40.024751    5144 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:40.024761    5144 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:40.024809    5144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:40.024841    5144 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:40.024848    5144 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:40.025263    5144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:40.167148    5144 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:40.232412    5144 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:40.232417    5144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:40.232593    5144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2
	I0311 04:27:40.244853    5144 main.go:141] libmachine: STDOUT: 
	I0311 04:27:40.244880    5144 main.go:141] libmachine: STDERR: 
	I0311 04:27:40.244930    5144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2 +20000M
	I0311 04:27:40.255980    5144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:40.255998    5144 main.go:141] libmachine: STDERR: 
	I0311 04:27:40.256015    5144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2
	I0311 04:27:40.256020    5144 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:40.256046    5144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d1:81:7c:d2:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2
	I0311 04:27:40.257982    5144 main.go:141] libmachine: STDOUT: 
	I0311 04:27:40.257998    5144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:40.258023    5144 client.go:171] duration metric: took 233.381917ms to LocalClient.Create
	I0311 04:27:42.260239    5144 start.go:128] duration metric: took 2.257466792s to createHost
	I0311 04:27:42.260314    5144 start.go:83] releasing machines lock for "false-896000", held for 2.257622917s
	W0311 04:27:42.260361    5144 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:42.277546    5144 out.go:177] * Deleting "false-896000" in qemu2 ...
	W0311 04:27:42.302133    5144 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:42.302164    5144 start.go:728] Will try again in 5 seconds ...
	I0311 04:27:47.304299    5144 start.go:360] acquireMachinesLock for false-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:47.304833    5144 start.go:364] duration metric: took 427.291µs to acquireMachinesLock for "false-896000"
	I0311 04:27:47.304964    5144 start.go:93] Provisioning new machine with config: &{Name:false-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:47.305190    5144 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:47.315802    5144 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:47.366185    5144 start.go:159] libmachine.API.Create for "false-896000" (driver="qemu2")
	I0311 04:27:47.366260    5144 client.go:168] LocalClient.Create starting
	I0311 04:27:47.366419    5144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:47.366493    5144 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:47.366513    5144 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:47.366575    5144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:47.366617    5144 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:47.366629    5144 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:47.367154    5144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:47.518984    5144 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:47.676601    5144 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:47.676613    5144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:47.676803    5144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2
	I0311 04:27:47.689731    5144 main.go:141] libmachine: STDOUT: 
	I0311 04:27:47.689761    5144 main.go:141] libmachine: STDERR: 
	I0311 04:27:47.689828    5144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2 +20000M
	I0311 04:27:47.700692    5144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:47.700714    5144 main.go:141] libmachine: STDERR: 
	I0311 04:27:47.700734    5144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2
	I0311 04:27:47.700739    5144 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:47.700778    5144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:1d:7c:20:82:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/false-896000/disk.qcow2
	I0311 04:27:47.702549    5144 main.go:141] libmachine: STDOUT: 
	I0311 04:27:47.702563    5144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:47.702575    5144 client.go:171] duration metric: took 336.30675ms to LocalClient.Create
	I0311 04:27:49.704706    5144 start.go:128] duration metric: took 2.399516167s to createHost
	I0311 04:27:49.704764    5144 start.go:83] releasing machines lock for "false-896000", held for 2.399953417s
	W0311 04:27:49.705121    5144 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:49.714769    5144 out.go:177] 
	W0311 04:27:49.720767    5144 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:27:49.720813    5144 out.go:239] * 
	* 
	W0311 04:27:49.723752    5144 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:27:49.732696    5144 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.88s)
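
Note on the failure mode: every network-plugin start in this part of the report dies the same way. Nothing is listening on /var/run/socket_vmnet, so the /opt/socket_vmnet/bin/socket_vmnet_client wrapper that is supposed to hand qemu-system-aarch64 its network file descriptor exits with "Connection refused" before the VM ever boots. A minimal Go probe reproduces that check; this is a diagnostic sketch only, not part of the test suite, and the socket path is copied from the log above:

	// probe.go: dial the unix socket that socket_vmnet_client uses; if the
	// daemon is down, this fails with the same "connection refused" seen above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet daemon is listening")
	}

If the probe fails the way these tests do, the first thing to try is restarting the socket_vmnet daemon on the build agent; the suggested "minikube delete -p <profile>" only clears the half-created profile and does not address the missing daemon.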

TestNetworkPlugins/group/enable-default-cni/Start (9.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.849428791s)

-- stdout --
	* [enable-default-cni-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-896000" primary control-plane node in "enable-default-cni-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:27:52.066386    5258 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:27:52.066524    5258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:52.066527    5258 out.go:304] Setting ErrFile to fd 2...
	I0311 04:27:52.066530    5258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:27:52.066658    5258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:27:52.067687    5258 out.go:298] Setting JSON to false
	I0311 04:27:52.083936    5258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3444,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:27:52.083995    5258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:27:52.089844    5258 out.go:177] * [enable-default-cni-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:27:52.092861    5258 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:27:52.096859    5258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:27:52.092957    5258 notify.go:220] Checking for updates...
	I0311 04:27:52.100882    5258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:27:52.103891    5258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:27:52.106798    5258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:27:52.109916    5258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:27:52.113081    5258 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:52.113150    5258 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:27:52.113203    5258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:27:52.117749    5258 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:27:52.124753    5258 start.go:297] selected driver: qemu2
	I0311 04:27:52.124760    5258 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:27:52.124774    5258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:27:52.127044    5258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:27:52.129802    5258 out.go:177] * Automatically selected the socket_vmnet network
	E0311 04:27:52.132969    5258 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0311 04:27:52.132985    5258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:27:52.133035    5258 cni.go:84] Creating CNI manager for "bridge"
	I0311 04:27:52.133048    5258 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:27:52.133094    5258 start.go:340] cluster config:
	{Name:enable-default-cni-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:27:52.137547    5258 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:27:52.144815    5258 out.go:177] * Starting "enable-default-cni-896000" primary control-plane node in "enable-default-cni-896000" cluster
	I0311 04:27:52.148867    5258 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:27:52.148883    5258 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:27:52.148902    5258 cache.go:56] Caching tarball of preloaded images
	I0311 04:27:52.148966    5258 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:27:52.148972    5258 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:27:52.149034    5258 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/enable-default-cni-896000/config.json ...
	I0311 04:27:52.149045    5258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/enable-default-cni-896000/config.json: {Name:mk4a8782bbdeda97611ecc2ff7cb118fc372651e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:27:52.149263    5258 start.go:360] acquireMachinesLock for enable-default-cni-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:52.149299    5258 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "enable-default-cni-896000"
	I0311 04:27:52.149310    5258 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:52.149345    5258 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:52.157674    5258 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:52.175053    5258 start.go:159] libmachine.API.Create for "enable-default-cni-896000" (driver="qemu2")
	I0311 04:27:52.175081    5258 client.go:168] LocalClient.Create starting
	I0311 04:27:52.175145    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:52.175174    5258 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:52.175187    5258 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:52.175231    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:52.175253    5258 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:52.175258    5258 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:52.175707    5258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:52.315372    5258 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:52.422442    5258 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:52.422449    5258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:52.422633    5258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2
	I0311 04:27:52.434572    5258 main.go:141] libmachine: STDOUT: 
	I0311 04:27:52.434593    5258 main.go:141] libmachine: STDERR: 
	I0311 04:27:52.434651    5258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2 +20000M
	I0311 04:27:52.445228    5258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:52.445253    5258 main.go:141] libmachine: STDERR: 
	I0311 04:27:52.445270    5258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2
	I0311 04:27:52.445274    5258 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:52.445307    5258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d5:6d:23:24:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2
	I0311 04:27:52.447034    5258 main.go:141] libmachine: STDOUT: 
	I0311 04:27:52.447050    5258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:52.447067    5258 client.go:171] duration metric: took 271.988875ms to LocalClient.Create
	I0311 04:27:54.447984    5258 start.go:128] duration metric: took 2.298666709s to createHost
	I0311 04:27:54.448070    5258 start.go:83] releasing machines lock for "enable-default-cni-896000", held for 2.2988105s
	W0311 04:27:54.448114    5258 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:54.455430    5258 out.go:177] * Deleting "enable-default-cni-896000" in qemu2 ...
	W0311 04:27:54.480830    5258 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:27:54.480863    5258 start.go:728] Will try again in 5 seconds ...
	I0311 04:27:59.481926    5258 start.go:360] acquireMachinesLock for enable-default-cni-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:27:59.482374    5258 start.go:364] duration metric: took 360.792µs to acquireMachinesLock for "enable-default-cni-896000"
	I0311 04:27:59.482495    5258 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:27:59.482785    5258 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:27:59.492421    5258 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:27:59.541766    5258 start.go:159] libmachine.API.Create for "enable-default-cni-896000" (driver="qemu2")
	I0311 04:27:59.541823    5258 client.go:168] LocalClient.Create starting
	I0311 04:27:59.541924    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:27:59.541980    5258 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:59.542003    5258 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:59.542058    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:27:59.542098    5258 main.go:141] libmachine: Decoding PEM data...
	I0311 04:27:59.542110    5258 main.go:141] libmachine: Parsing certificate...
	I0311 04:27:59.542642    5258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:27:59.692993    5258 main.go:141] libmachine: Creating SSH key...
	I0311 04:27:59.814332    5258 main.go:141] libmachine: Creating Disk image...
	I0311 04:27:59.814342    5258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:27:59.814518    5258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2
	I0311 04:27:59.827112    5258 main.go:141] libmachine: STDOUT: 
	I0311 04:27:59.827136    5258 main.go:141] libmachine: STDERR: 
	I0311 04:27:59.827187    5258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2 +20000M
	I0311 04:27:59.837935    5258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:27:59.837951    5258 main.go:141] libmachine: STDERR: 
	I0311 04:27:59.837968    5258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2
	I0311 04:27:59.837973    5258 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:27:59.838013    5258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c2:fc:9c:3a:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/enable-default-cni-896000/disk.qcow2
	I0311 04:27:59.839765    5258 main.go:141] libmachine: STDOUT: 
	I0311 04:27:59.839783    5258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:27:59.839795    5258 client.go:171] duration metric: took 297.971875ms to LocalClient.Create
	I0311 04:28:01.841933    5258 start.go:128] duration metric: took 2.359169375s to createHost
	I0311 04:28:01.842012    5258 start.go:83] releasing machines lock for "enable-default-cni-896000", held for 2.359664167s
	W0311 04:28:01.842412    5258 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:01.851970    5258 out.go:177] 
	W0311 04:28:01.858187    5258 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:28:01.858217    5258 out.go:239] * 
	W0311 04:28:01.860867    5258 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:28:01.871008    5258 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.85s)
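
Besides the shared socket_vmnet failure, this test documents minikube's flag translation: the E-level line at 04:27:52.132969 shows the deprecated --enable-default-cni flag being rewritten to --cni=bridge, which is why the cluster config above carries EnableDefaultCNI:false CNI:bridge. A hypothetical sketch of that mapping (illustrative only, not minikube's actual start_flags code):

	// normalizeCNI mirrors the translation reported in the log: the deprecated
	// boolean flag becomes the bridge CNI when no explicit --cni was given.
	package main

	import "fmt"

	func normalizeCNI(enableDefaultCNI bool, cni string) string {
		if enableDefaultCNI && cni == "" {
			return "bridge"
		}
		return cni
	}

	func main() {
		fmt.Println(normalizeCNI(true, ""))         // "bridge", as in this test
		fmt.Println(normalizeCNI(false, "flannel")) // an explicit --cni passes through
	}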

TestNetworkPlugins/group/flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.820285166s)

-- stdout --
	* [flannel-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-896000" primary control-plane node in "flannel-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:28:04.188713    5368 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:28:04.188853    5368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:04.188856    5368 out.go:304] Setting ErrFile to fd 2...
	I0311 04:28:04.188858    5368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:04.188984    5368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:28:04.190066    5368 out.go:298] Setting JSON to false
	I0311 04:28:04.206229    5368 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3456,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:28:04.206307    5368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:28:04.212838    5368 out.go:177] * [flannel-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:28:04.219836    5368 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:28:04.223756    5368 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:28:04.219871    5368 notify.go:220] Checking for updates...
	I0311 04:28:04.229837    5368 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:28:04.232759    5368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:28:04.235804    5368 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:28:04.238800    5368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:28:04.242069    5368 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:04.242147    5368 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:04.242191    5368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:28:04.246792    5368 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:28:04.253765    5368 start.go:297] selected driver: qemu2
	I0311 04:28:04.253770    5368 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:28:04.253775    5368 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:28:04.256080    5368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:28:04.260792    5368 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:28:04.265000    5368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:28:04.265044    5368 cni.go:84] Creating CNI manager for "flannel"
	I0311 04:28:04.265049    5368 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0311 04:28:04.265086    5368 start.go:340] cluster config:
	{Name:flannel-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:28:04.269692    5368 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:04.277818    5368 out.go:177] * Starting "flannel-896000" primary control-plane node in "flannel-896000" cluster
	I0311 04:28:04.281694    5368 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:28:04.281708    5368 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:28:04.281719    5368 cache.go:56] Caching tarball of preloaded images
	I0311 04:28:04.281780    5368 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:28:04.281787    5368 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:28:04.281849    5368 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/flannel-896000/config.json ...
	I0311 04:28:04.281861    5368 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/flannel-896000/config.json: {Name:mk10d98c32b2f3312854b82778f8c4d2fec6d484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:28:04.282098    5368 start.go:360] acquireMachinesLock for flannel-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:04.282132    5368 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "flannel-896000"
	I0311 04:28:04.282144    5368 start.go:93] Provisioning new machine with config: &{Name:flannel-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:04.282177    5368 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:04.290812    5368 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:28:04.309569    5368 start.go:159] libmachine.API.Create for "flannel-896000" (driver="qemu2")
	I0311 04:28:04.309601    5368 client.go:168] LocalClient.Create starting
	I0311 04:28:04.309684    5368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:04.309719    5368 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:04.309732    5368 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:04.309774    5368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:04.309806    5368 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:04.309815    5368 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:04.310207    5368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:04.451035    5368 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:04.527287    5368 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:04.527292    5368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:04.527481    5368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2
	I0311 04:28:04.540262    5368 main.go:141] libmachine: STDOUT: 
	I0311 04:28:04.540279    5368 main.go:141] libmachine: STDERR: 
	I0311 04:28:04.540337    5368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2 +20000M
	I0311 04:28:04.551269    5368 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:04.551290    5368 main.go:141] libmachine: STDERR: 
	I0311 04:28:04.551307    5368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2
	I0311 04:28:04.551312    5368 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:04.551338    5368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b0:cf:40:46:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2
	I0311 04:28:04.553181    5368 main.go:141] libmachine: STDOUT: 
	I0311 04:28:04.553196    5368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:04.553214    5368 client.go:171] duration metric: took 243.612041ms to LocalClient.Create
	I0311 04:28:06.555428    5368 start.go:128] duration metric: took 2.273263958s to createHost
	I0311 04:28:06.555512    5368 start.go:83] releasing machines lock for "flannel-896000", held for 2.273418208s
	W0311 04:28:06.555565    5368 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:06.571554    5368 out.go:177] * Deleting "flannel-896000" in qemu2 ...
	W0311 04:28:06.595883    5368 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:06.595918    5368 start.go:728] Will try again in 5 seconds ...
	I0311 04:28:11.598062    5368 start.go:360] acquireMachinesLock for flannel-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:11.598467    5368 start.go:364] duration metric: took 293.417µs to acquireMachinesLock for "flannel-896000"
	I0311 04:28:11.598577    5368 start.go:93] Provisioning new machine with config: &{Name:flannel-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:11.598842    5368 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:11.608403    5368 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:28:11.659748    5368 start.go:159] libmachine.API.Create for "flannel-896000" (driver="qemu2")
	I0311 04:28:11.659797    5368 client.go:168] LocalClient.Create starting
	I0311 04:28:11.659914    5368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:11.659976    5368 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:11.659998    5368 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:11.660065    5368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:11.660111    5368 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:11.660125    5368 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:11.660659    5368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:11.812793    5368 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:11.912089    5368 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:11.912100    5368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:11.912281    5368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2
	I0311 04:28:11.924478    5368 main.go:141] libmachine: STDOUT: 
	I0311 04:28:11.924499    5368 main.go:141] libmachine: STDERR: 
	I0311 04:28:11.924554    5368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2 +20000M
	I0311 04:28:11.935178    5368 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:11.935206    5368 main.go:141] libmachine: STDERR: 
	I0311 04:28:11.935221    5368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2
	I0311 04:28:11.935227    5368 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:11.935267    5368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:09:fd:da:6d:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/flannel-896000/disk.qcow2
	I0311 04:28:11.936984    5368 main.go:141] libmachine: STDOUT: 
	I0311 04:28:11.936998    5368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:11.937012    5368 client.go:171] duration metric: took 277.216167ms to LocalClient.Create
	I0311 04:28:13.939153    5368 start.go:128] duration metric: took 2.340335041s to createHost
	I0311 04:28:13.939216    5368 start.go:83] releasing machines lock for "flannel-896000", held for 2.34077725s
	W0311 04:28:13.939634    5368 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:13.948427    5368 out.go:177] 
	W0311 04:28:13.952368    5368 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:28:13.952399    5368 out.go:239] * 
	W0311 04:28:13.954926    5368 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:28:13.964472    5368 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.82s)
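
One thing these logs do confirm is that disk preparation itself works on this agent. Every attempt runs the same two-step sequence, qemu-img convert to re-encode the raw boot2docker seed as qcow2, then qemu-img resize to grow it by the requested 20000 MB, and both steps report success ("STDOUT: Image resized.") before the launch fails. A sketch of that sequence via os/exec, with file names as placeholders for the per-profile paths in the log:

	// prepdisk.go: the convert-then-resize sequence shown in the
	// "Creating 20000 MB hard disk image..." lines above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		raw, img := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
		run("qemu-img", "resize", img, "+20000M")
	}

Since both qemu-img steps succeed in every attempt, the failure isolates cleanly to the network hand-off through socket_vmnet, not to disk creation.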

TestNetworkPlugins/group/bridge/Start (9.91s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.905636209s)

-- stdout --
	* [bridge-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-896000" primary control-plane node in "bridge-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:28:16.476253    5486 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:28:16.476377    5486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:16.476382    5486 out.go:304] Setting ErrFile to fd 2...
	I0311 04:28:16.476385    5486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:16.476525    5486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:28:16.477581    5486 out.go:298] Setting JSON to false
	I0311 04:28:16.493751    5486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3468,"bootTime":1710153028,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:28:16.493834    5486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:28:16.498831    5486 out.go:177] * [bridge-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:28:16.505713    5486 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:28:16.505773    5486 notify.go:220] Checking for updates...
	I0311 04:28:16.512696    5486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:28:16.515700    5486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:28:16.518772    5486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:28:16.521703    5486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:28:16.524673    5486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:28:16.528100    5486 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:16.528164    5486 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:16.528221    5486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:28:16.532575    5486 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:28:16.539666    5486 start.go:297] selected driver: qemu2
	I0311 04:28:16.539672    5486 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:28:16.539684    5486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:28:16.541905    5486 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:28:16.545605    5486 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:28:16.548806    5486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:28:16.548842    5486 cni.go:84] Creating CNI manager for "bridge"
	I0311 04:28:16.548852    5486 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:28:16.548881    5486 start.go:340] cluster config:
	{Name:bridge-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:28:16.553525    5486 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:16.561649    5486 out.go:177] * Starting "bridge-896000" primary control-plane node in "bridge-896000" cluster
	I0311 04:28:16.565690    5486 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:28:16.565704    5486 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:28:16.565714    5486 cache.go:56] Caching tarball of preloaded images
	I0311 04:28:16.565768    5486 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:28:16.565774    5486 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:28:16.565842    5486 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/bridge-896000/config.json ...
	I0311 04:28:16.565854    5486 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/bridge-896000/config.json: {Name:mkc01d358b43fd670e05ed55e99627a30a8755a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:28:16.566213    5486 start.go:360] acquireMachinesLock for bridge-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:16.566246    5486 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "bridge-896000"
	I0311 04:28:16.566257    5486 start.go:93] Provisioning new machine with config: &{Name:bridge-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:bridge-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:16.566283    5486 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:16.574655    5486 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:28:16.592252    5486 start.go:159] libmachine.API.Create for "bridge-896000" (driver="qemu2")
	I0311 04:28:16.592283    5486 client.go:168] LocalClient.Create starting
	I0311 04:28:16.592341    5486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:16.592372    5486 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:16.592384    5486 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:16.592434    5486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:16.592457    5486 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:16.592465    5486 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:16.592907    5486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:16.734763    5486 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:16.920016    5486 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:16.920025    5486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:16.920228    5486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2
	I0311 04:28:16.932620    5486 main.go:141] libmachine: STDOUT: 
	I0311 04:28:16.932637    5486 main.go:141] libmachine: STDERR: 
	I0311 04:28:16.932691    5486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2 +20000M
	I0311 04:28:16.943268    5486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:16.943305    5486 main.go:141] libmachine: STDERR: 
	I0311 04:28:16.943318    5486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2
	I0311 04:28:16.943321    5486 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:16.943347    5486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:e2:d6:e9:99:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2
	I0311 04:28:16.945049    5486 main.go:141] libmachine: STDOUT: 
	I0311 04:28:16.945063    5486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:16.945081    5486 client.go:171] duration metric: took 352.799125ms to LocalClient.Create
	I0311 04:28:18.947309    5486 start.go:128] duration metric: took 2.381051583s to createHost
	I0311 04:28:18.947381    5486 start.go:83] releasing machines lock for "bridge-896000", held for 2.381173459s
	W0311 04:28:18.947432    5486 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:18.958609    5486 out.go:177] * Deleting "bridge-896000" in qemu2 ...
	W0311 04:28:18.986591    5486 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:18.986627    5486 start.go:728] Will try again in 5 seconds ...
	I0311 04:28:23.988717    5486 start.go:360] acquireMachinesLock for bridge-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:23.989198    5486 start.go:364] duration metric: took 396.792µs to acquireMachinesLock for "bridge-896000"
	I0311 04:28:23.989299    5486 start.go:93] Provisioning new machine with config: &{Name:bridge-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:bridge-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:23.989594    5486 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:23.997816    5486 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:28:24.046459    5486 start.go:159] libmachine.API.Create for "bridge-896000" (driver="qemu2")
	I0311 04:28:24.046513    5486 client.go:168] LocalClient.Create starting
	I0311 04:28:24.046616    5486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:24.046683    5486 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:24.046701    5486 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:24.046771    5486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:24.046815    5486 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:24.046825    5486 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:24.047432    5486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:24.200408    5486 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:24.276793    5486 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:24.276798    5486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:24.276962    5486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2
	I0311 04:28:24.289123    5486 main.go:141] libmachine: STDOUT: 
	I0311 04:28:24.289145    5486 main.go:141] libmachine: STDERR: 
	I0311 04:28:24.289199    5486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2 +20000M
	I0311 04:28:24.299744    5486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:24.299764    5486 main.go:141] libmachine: STDERR: 
	I0311 04:28:24.299779    5486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2
	I0311 04:28:24.299783    5486 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:24.299810    5486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5c:01:ad:29:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/bridge-896000/disk.qcow2
	I0311 04:28:24.301537    5486 main.go:141] libmachine: STDOUT: 
	I0311 04:28:24.301555    5486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:24.301566    5486 client.go:171] duration metric: took 255.053916ms to LocalClient.Create
	I0311 04:28:26.303696    5486 start.go:128] duration metric: took 2.314103834s to createHost
	I0311 04:28:26.303750    5486 start.go:83] releasing machines lock for "bridge-896000", held for 2.3145765s
	W0311 04:28:26.304070    5486 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:26.318729    5486 out.go:177] 
	W0311 04:28:26.322764    5486 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:28:26.322793    5486 out.go:239] * 
	* 
	W0311 04:28:26.325708    5486 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:28:26.336687    5486 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.91s)
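Note: this failure, like the others in this group, reduces to the qemu2 driver shelling out to socket_vmnet_client and getting "Connection refused" on /var/run/socket_vmnet, i.e. nothing is listening on the daemon socket. A minimal reachability probe, written as a standalone Go sketch (the socket path is taken from the config dump above; the program itself is hypothetical and not part of the test suite), can confirm whether the socket_vmnet daemon is accepting connections before re-running the suite:

	// probe_socket_vmnet.go — hypothetical diagnostic helper, not part of net_test.go.
	// Dials the unix socket that socket_vmnet_client failed to reach in the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config dump
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}

A "connection refused" from this probe would point at the daemon being down on the CI host rather than at the individual network plugins under test, which is consistent with bridge and kubenet failing identically before any CNI configuration is exercised.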

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-896000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.907925708s)

-- stdout --
	* [kubenet-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-896000" primary control-plane node in "kubenet-896000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:28:28.639089    5596 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:28:28.639211    5596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:28.639214    5596 out.go:304] Setting ErrFile to fd 2...
	I0311 04:28:28.639216    5596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:28.639347    5596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:28:28.640435    5596 out.go:298] Setting JSON to false
	I0311 04:28:28.656592    5596 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3480,"bootTime":1710153028,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:28:28.656657    5596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:28:28.663006    5596 out.go:177] * [kubenet-896000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:28:28.669974    5596 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:28:28.673969    5596 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:28:28.670024    5596 notify.go:220] Checking for updates...
	I0311 04:28:28.679929    5596 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:28:28.683051    5596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:28:28.686005    5596 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:28:28.688922    5596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:28:28.692381    5596 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:28.692456    5596 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:28.692503    5596 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:28:28.696957    5596 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:28:28.703953    5596 start.go:297] selected driver: qemu2
	I0311 04:28:28.703961    5596 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:28:28.703969    5596 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:28:28.706240    5596 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:28:28.709979    5596 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:28:28.718071    5596 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:28:28.718109    5596 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0311 04:28:28.718131    5596 start.go:340] cluster config:
	{Name:kubenet-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:28:28.722813    5596 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:28.729986    5596 out.go:177] * Starting "kubenet-896000" primary control-plane node in "kubenet-896000" cluster
	I0311 04:28:28.733972    5596 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:28:28.733987    5596 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:28:28.734005    5596 cache.go:56] Caching tarball of preloaded images
	I0311 04:28:28.734074    5596 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:28:28.734081    5596 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:28:28.734147    5596 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/kubenet-896000/config.json ...
	I0311 04:28:28.734161    5596 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/kubenet-896000/config.json: {Name:mk362a8c3eb2a84d4af740c3960f7194109c7072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:28:28.734404    5596 start.go:360] acquireMachinesLock for kubenet-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:28.734443    5596 start.go:364] duration metric: took 31.791µs to acquireMachinesLock for "kubenet-896000"
	I0311 04:28:28.734456    5596 start.go:93] Provisioning new machine with config: &{Name:kubenet-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kubenet-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:28.734492    5596 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:28.741940    5596 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:28:28.760300    5596 start.go:159] libmachine.API.Create for "kubenet-896000" (driver="qemu2")
	I0311 04:28:28.760331    5596 client.go:168] LocalClient.Create starting
	I0311 04:28:28.760401    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:28.760435    5596 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:28.760445    5596 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:28.760495    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:28.760520    5596 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:28.760526    5596 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:28.760904    5596 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:28.901405    5596 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:29.030836    5596 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:29.030842    5596 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:29.031012    5596 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2
	I0311 04:28:29.043319    5596 main.go:141] libmachine: STDOUT: 
	I0311 04:28:29.043342    5596 main.go:141] libmachine: STDERR: 
	I0311 04:28:29.043390    5596 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2 +20000M
	I0311 04:28:29.053913    5596 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:29.053930    5596 main.go:141] libmachine: STDERR: 
	I0311 04:28:29.053946    5596 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2
	I0311 04:28:29.053953    5596 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:29.053985    5596 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b9:32:ef:02:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2
	I0311 04:28:29.055733    5596 main.go:141] libmachine: STDOUT: 
	I0311 04:28:29.055750    5596 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:29.055768    5596 client.go:171] duration metric: took 295.43725ms to LocalClient.Create
	I0311 04:28:31.057914    5596 start.go:128] duration metric: took 2.323450542s to createHost
	I0311 04:28:31.057986    5596 start.go:83] releasing machines lock for "kubenet-896000", held for 2.323584375s
	W0311 04:28:31.058029    5596 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:31.069389    5596 out.go:177] * Deleting "kubenet-896000" in qemu2 ...
	W0311 04:28:31.096047    5596 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:31.096095    5596 start.go:728] Will try again in 5 seconds ...
	I0311 04:28:36.097806    5596 start.go:360] acquireMachinesLock for kubenet-896000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:36.098233    5596 start.go:364] duration metric: took 318.916µs to acquireMachinesLock for "kubenet-896000"
	I0311 04:28:36.098404    5596 start.go:93] Provisioning new machine with config: &{Name:kubenet-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kubenet-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:36.098679    5596 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:36.108384    5596 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 04:28:36.158393    5596 start.go:159] libmachine.API.Create for "kubenet-896000" (driver="qemu2")
	I0311 04:28:36.158470    5596 client.go:168] LocalClient.Create starting
	I0311 04:28:36.158604    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:36.158666    5596 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:36.158684    5596 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:36.158749    5596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:36.158792    5596 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:36.158810    5596 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:36.159360    5596 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:36.313368    5596 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:36.443629    5596 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:36.443634    5596 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:36.443811    5596 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2
	I0311 04:28:36.456314    5596 main.go:141] libmachine: STDOUT: 
	I0311 04:28:36.456420    5596 main.go:141] libmachine: STDERR: 
	I0311 04:28:36.456475    5596 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2 +20000M
	I0311 04:28:36.467267    5596 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:36.467331    5596 main.go:141] libmachine: STDERR: 
	I0311 04:28:36.467344    5596 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2
	I0311 04:28:36.467349    5596 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:36.467375    5596 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:a8:29:22:36:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/kubenet-896000/disk.qcow2
	I0311 04:28:36.469102    5596 main.go:141] libmachine: STDOUT: 
	I0311 04:28:36.469157    5596 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:36.469168    5596 client.go:171] duration metric: took 310.68925ms to LocalClient.Create
	I0311 04:28:38.471304    5596 start.go:128] duration metric: took 2.372645833s to createHost
	I0311 04:28:38.471399    5596 start.go:83] releasing machines lock for "kubenet-896000", held for 2.373169334s
	W0311 04:28:38.471724    5596 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:38.481362    5596 out.go:177] 
	W0311 04:28:38.487397    5596 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:28:38.487634    5596 out.go:239] * 
	* 
	W0311 04:28:38.490443    5596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:28:38.501402    5596 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.846737375s)

-- stdout --
	* [old-k8s-version-749000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-749000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:28:40.801911    5714 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:28:40.802038    5714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:40.802042    5714 out.go:304] Setting ErrFile to fd 2...
	I0311 04:28:40.802044    5714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:40.802164    5714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:28:40.803233    5714 out.go:298] Setting JSON to false
	I0311 04:28:40.819320    5714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3492,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:28:40.819388    5714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:28:40.825851    5714 out.go:177] * [old-k8s-version-749000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:28:40.831790    5714 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:28:40.835818    5714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:28:40.831847    5714 notify.go:220] Checking for updates...
	I0311 04:28:40.838797    5714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:28:40.841767    5714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:28:40.845817    5714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:28:40.848803    5714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:28:40.852227    5714 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:40.852297    5714 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:40.852348    5714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:28:40.856815    5714 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:28:40.863753    5714 start.go:297] selected driver: qemu2
	I0311 04:28:40.863760    5714 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:28:40.863765    5714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:28:40.865951    5714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:28:40.868844    5714 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:28:40.871846    5714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:28:40.871878    5714 cni.go:84] Creating CNI manager for ""
	I0311 04:28:40.871885    5714 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 04:28:40.871920    5714 start.go:340] cluster config:
	{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:28:40.876551    5714 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:40.883764    5714 out.go:177] * Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	I0311 04:28:40.886811    5714 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 04:28:40.886827    5714 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 04:28:40.886840    5714 cache.go:56] Caching tarball of preloaded images
	I0311 04:28:40.886906    5714 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:28:40.886915    5714 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 04:28:40.886981    5714 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/old-k8s-version-749000/config.json ...
	I0311 04:28:40.886993    5714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/old-k8s-version-749000/config.json: {Name:mkda819f3fded082a87a6117c8ef816e4f77557a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:28:40.887292    5714 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:40.887329    5714 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "old-k8s-version-749000"
	I0311 04:28:40.887339    5714 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:40.887382    5714 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:40.891780    5714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:28:40.908855    5714 start.go:159] libmachine.API.Create for "old-k8s-version-749000" (driver="qemu2")
	I0311 04:28:40.908882    5714 client.go:168] LocalClient.Create starting
	I0311 04:28:40.908942    5714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:40.908973    5714 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:40.908983    5714 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:40.909027    5714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:40.909049    5714 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:40.909057    5714 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:40.909389    5714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:41.050791    5714 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:41.215969    5714 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:41.215979    5714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:41.216164    5714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:41.228605    5714 main.go:141] libmachine: STDOUT: 
	I0311 04:28:41.228627    5714 main.go:141] libmachine: STDERR: 
	I0311 04:28:41.228672    5714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2 +20000M
	I0311 04:28:41.239239    5714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:41.239255    5714 main.go:141] libmachine: STDERR: 
	I0311 04:28:41.239275    5714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:41.239280    5714 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:41.239307    5714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:90:5e:e7:b5:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:41.241047    5714 main.go:141] libmachine: STDOUT: 
	I0311 04:28:41.241061    5714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:41.241079    5714 client.go:171] duration metric: took 332.199375ms to LocalClient.Create
	I0311 04:28:43.242942    5714 start.go:128] duration metric: took 2.355592584s to createHost
	I0311 04:28:43.243006    5714 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 2.35571975s
	W0311 04:28:43.243052    5714 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:43.250226    5714 out.go:177] * Deleting "old-k8s-version-749000" in qemu2 ...
	W0311 04:28:43.280040    5714 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:43.280070    5714 start.go:728] Will try again in 5 seconds ...
	I0311 04:28:48.282222    5714 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:48.282639    5714 start.go:364] duration metric: took 320.458µs to acquireMachinesLock for "old-k8s-version-749000"
	I0311 04:28:48.282748    5714 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:48.283037    5714 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:48.292636    5714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:28:48.340921    5714 start.go:159] libmachine.API.Create for "old-k8s-version-749000" (driver="qemu2")
	I0311 04:28:48.340988    5714 client.go:168] LocalClient.Create starting
	I0311 04:28:48.341136    5714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:48.341206    5714 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:48.341223    5714 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:48.341300    5714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:48.341344    5714 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:48.341353    5714 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:48.341910    5714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:48.492831    5714 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:48.540183    5714 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:48.540192    5714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:48.540364    5714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:48.552584    5714 main.go:141] libmachine: STDOUT: 
	I0311 04:28:48.552605    5714 main.go:141] libmachine: STDERR: 
	I0311 04:28:48.552667    5714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2 +20000M
	I0311 04:28:48.563278    5714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:48.563303    5714 main.go:141] libmachine: STDERR: 
	I0311 04:28:48.563316    5714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:48.563322    5714 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:48.563375    5714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:36:b4:ec:cc:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:48.565059    5714 main.go:141] libmachine: STDOUT: 
	I0311 04:28:48.565083    5714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:48.565099    5714 client.go:171] duration metric: took 224.096042ms to LocalClient.Create
	I0311 04:28:50.567240    5714 start.go:128] duration metric: took 2.284203708s to createHost
	I0311 04:28:50.567317    5714 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 2.284704166s
	W0311 04:28:50.567693    5714 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-749000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:50.582400    5714 out.go:177] 
	W0311 04:28:50.585430    5714 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:28:50.585471    5714 out.go:239] * 
	W0311 04:28:50.588055    5714 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:28:50.603121    5714 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
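Every failure in this group reduces to the same root cause visible in the stderr above: QEMU is launched through socket_vmnet_client, and the socket_vmnet daemon it dials is not listening on /var/run/socket_vmnet. A minimal triage sketch, assuming the source-install layout the log shows under /opt/socket_vmnet (these commands are illustrative and were not part of the test run):

	ls -l /var/run/socket_vmnet        # the unix socket the client dials must exist
	pgrep -fl socket_vmnet             # the daemon must be running (as root)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

The --vmnet-gateway address is socket_vmnet's documented default and is an assumption here, not taken from this report; with the daemon back, the same start command should get past host creation.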
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (69.911875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-749000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-749000 create -f testdata/busybox.yaml: exit status 1 (29.519875ms)
** stderr ** 
	error: context "old-k8s-version-749000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-749000 create -f testdata/busybox.yaml failed: exit status 1
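The create never reaches an API server: kubectl resolves --context against kubeconfig entries that minikube writes only after a successful start, and no old-k8s-version-749000 entry exists because the host never came up. A quick manual confirmation (standard kubectl, illustrative):

	kubectl config get-contexts
	kubectl --context old-k8s-version-749000 get nodes   # fails with the same missing-context error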
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (32.073ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (31.90725ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-749000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-749000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-749000 describe deploy/metrics-server -n kube-system: exit status 1 (26.891291ms)
** stderr ** 
	error: context "old-k8s-version-749000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-749000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (32.50725ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.196669459s)
-- stdout --
	* [old-k8s-version-749000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0311 04:28:53.166254    5758 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:28:53.166373    5758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:53.166376    5758 out.go:304] Setting ErrFile to fd 2...
	I0311 04:28:53.166378    5758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:53.166508    5758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:28:53.167461    5758 out.go:298] Setting JSON to false
	I0311 04:28:53.183675    5758 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3505,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:28:53.183737    5758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:28:53.187369    5758 out.go:177] * [old-k8s-version-749000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:28:53.194376    5758 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:28:53.194428    5758 notify.go:220] Checking for updates...
	I0311 04:28:53.202353    5758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:28:53.205410    5758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:28:53.208324    5758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:28:53.211363    5758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:28:53.214370    5758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:28:53.217557    5758 config.go:182] Loaded profile config "old-k8s-version-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0311 04:28:53.221288    5758 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 04:28:53.224389    5758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:28:53.227327    5758 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:28:53.234339    5758 start.go:297] selected driver: qemu2
	I0311 04:28:53.234346    5758 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:28:53.234423    5758 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:28:53.236703    5758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:28:53.236751    5758 cni.go:84] Creating CNI manager for ""
	I0311 04:28:53.236758    5758 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 04:28:53.236791    5758 start.go:340] cluster config:
	{Name:old-k8s-version-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:28:53.241242    5758 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:53.249322    5758 out.go:177] * Starting "old-k8s-version-749000" primary control-plane node in "old-k8s-version-749000" cluster
	I0311 04:28:53.252248    5758 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 04:28:53.252263    5758 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 04:28:53.252276    5758 cache.go:56] Caching tarball of preloaded images
	I0311 04:28:53.252345    5758 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:28:53.252351    5758 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 04:28:53.252419    5758 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/old-k8s-version-749000/config.json ...
	I0311 04:28:53.252855    5758 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:53.252881    5758 start.go:364] duration metric: took 19.959µs to acquireMachinesLock for "old-k8s-version-749000"
	I0311 04:28:53.252889    5758 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:28:53.252894    5758 fix.go:54] fixHost starting: 
	I0311 04:28:53.253018    5758 fix.go:112] recreateIfNeeded on old-k8s-version-749000: state=Stopped err=<nil>
	W0311 04:28:53.253026    5758 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:28:53.257359    5758 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	I0311 04:28:53.264327    5758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:36:b4:ec:cc:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:53.266343    5758 main.go:141] libmachine: STDOUT: 
	I0311 04:28:53.266367    5758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:53.266402    5758 fix.go:56] duration metric: took 13.508916ms for fixHost
	I0311 04:28:53.266407    5758 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 13.522292ms
	W0311 04:28:53.266414    5758 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:28:53.266444    5758 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:53.266449    5758 start.go:728] Will try again in 5 seconds ...
	I0311 04:28:58.266640    5758 start.go:360] acquireMachinesLock for old-k8s-version-749000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:58.267050    5758 start.go:364] duration metric: took 298.792µs to acquireMachinesLock for "old-k8s-version-749000"
	I0311 04:28:58.267197    5758 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:28:58.267216    5758 fix.go:54] fixHost starting: 
	I0311 04:28:58.267881    5758 fix.go:112] recreateIfNeeded on old-k8s-version-749000: state=Stopped err=<nil>
	W0311 04:28:58.267909    5758 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:28:58.278244    5758 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-749000" ...
	I0311 04:28:58.281422    5758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:36:b4:ec:cc:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/old-k8s-version-749000/disk.qcow2
	I0311 04:28:58.291352    5758 main.go:141] libmachine: STDOUT: 
	I0311 04:28:58.291426    5758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:58.291493    5758 fix.go:56] duration metric: took 24.275083ms for fixHost
	I0311 04:28:58.291506    5758 start.go:83] releasing machines lock for "old-k8s-version-749000", held for 24.409333ms
	W0311 04:28:58.291662    5758 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:28:58.300250    5758 out.go:177] 
	W0311 04:28:58.304350    5758 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:28:58.304402    5758 out.go:239] * 
	W0311 04:28:58.306818    5758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:28:58.317312    5758 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-749000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
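Once socket_vmnet is reachable again, the recovery path the log itself suggests should clear the stale machine; a sketch using the same binary and profile as the failing run:

	out/minikube-darwin-arm64 delete -p old-k8s-version-749000
	out/minikube-darwin-arm64 start -p old-k8s-version-749000 --driver=qemu2 --kubernetes-version=v1.20.0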
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (70.080291ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-749000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (33.717292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-749000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-749000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-749000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.629ms)
** stderr ** 
	error: context "old-k8s-version-749000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-749000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (32.139458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-749000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
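The (-want +got) diff above carries only minus entries: all eight images expected for v1.20.0 are missing, which follows directly from the VM never booting, so nothing was ever pulled or loaded. The check can be replayed by hand with the same command the test runs:

	out/minikube-darwin-arm64 -p old-k8s-version-749000 image list --format=json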
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (32.159667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-749000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-749000 --alsologtostderr -v=1: exit status 83 (44.52825ms)
-- stdout --
	* The control-plane node old-k8s-version-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-749000"
-- /stdout --
** stderr ** 
	I0311 04:28:58.603623    5780 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:28:58.604041    5780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:58.604045    5780 out.go:304] Setting ErrFile to fd 2...
	I0311 04:28:58.604047    5780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:58.604195    5780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:28:58.604393    5780 out.go:298] Setting JSON to false
	I0311 04:28:58.604401    5780 mustload.go:65] Loading cluster: old-k8s-version-749000
	I0311 04:28:58.604597    5780 config.go:182] Loaded profile config "old-k8s-version-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0311 04:28:58.609175    5780 out.go:177] * The control-plane node old-k8s-version-749000 host is not running: state=Stopped
	I0311 04:28:58.613145    5780 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-749000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-749000 --alsologtostderr -v=1 failed: exit status 83
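Exit status 83 here is the wrong-host-state outcome rather than a guest error: pause inspects the profile, finds the host Stopped, and prints the remedy instead of attempting the operation. Rerunning after a successful start (illustrative):

	out/minikube-darwin-arm64 start -p old-k8s-version-749000
	out/minikube-darwin-arm64 pause -p old-k8s-version-749000 --alsologtostderr -v=1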
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (31.779042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (31.92775ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.82s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.747953958s)
-- stdout --
	* [no-preload-114000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-114000" primary control-plane node in "no-preload-114000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-114000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0311 04:28:59.085100    5803 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:28:59.085224    5803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:59.085227    5803 out.go:304] Setting ErrFile to fd 2...
	I0311 04:28:59.085229    5803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:28:59.085357    5803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:28:59.086414    5803 out.go:298] Setting JSON to false
	I0311 04:28:59.102821    5803 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3511,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:28:59.102889    5803 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:28:59.108215    5803 out.go:177] * [no-preload-114000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:28:59.115131    5803 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:28:59.119163    5803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:28:59.115183    5803 notify.go:220] Checking for updates...
	I0311 04:28:59.125065    5803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:28:59.128152    5803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:28:59.131078    5803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:28:59.134130    5803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:28:59.137465    5803 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:59.137525    5803 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:28:59.137579    5803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:28:59.142101    5803 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:28:59.149149    5803 start.go:297] selected driver: qemu2
	I0311 04:28:59.149158    5803 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:28:59.149164    5803 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:28:59.151374    5803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:28:59.154095    5803 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:28:59.157219    5803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:28:59.157266    5803 cni.go:84] Creating CNI manager for ""
	I0311 04:28:59.157274    5803 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:28:59.157279    5803 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:28:59.157310    5803 start.go:340] cluster config:
	{Name:no-preload-114000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
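
	Every notable field in this cluster config traces back to the flags under test: Memory:2200, Driver:qemu2, KubernetesVersion:v1.29.0-rc.2, Network:socket_vmnet (auto-selected above) and ShouldLoadCachedImages:true all come from the invocation recorded at the bottom of this test:

		out/minikube-darwin-arm64 start -p no-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2 --kubernetes-version=v1.29.0-rc.2

	Because --preload=false is set, minikube skips the single preload tarball and instead fills the per-image cache, which is why the cache.go lock/save lines dominate the rest of this log.
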
	I0311 04:28:59.161795    5803 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.169178    5803 out.go:177] * Starting "no-preload-114000" primary control-plane node in "no-preload-114000" cluster
	I0311 04:28:59.173088    5803 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 04:28:59.173183    5803 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/no-preload-114000/config.json ...
	I0311 04:28:59.173204    5803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/no-preload-114000/config.json: {Name:mk898e18ceaf850c66ab4aadf2779e7d2e6753ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:28:59.173223    5803 cache.go:107] acquiring lock: {Name:mk2f4032ff1030d1bcd8a6e7b64d0f5de14c576d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173239    5803 cache.go:107] acquiring lock: {Name:mka2856f7ed8f9639ed63bbc57edb9908f68e759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173260    5803 cache.go:107] acquiring lock: {Name:mk98e530b5450d61c7157f208832e85464162a73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173293    5803 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0311 04:28:59.173305    5803 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.125µs
	I0311 04:28:59.173233    5803 cache.go:107] acquiring lock: {Name:mk9fc2a450dcdb6f014aa1cfb439c5555f2669ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173325    5803 cache.go:107] acquiring lock: {Name:mk78843ebb02b57c1501ad3277ed1e0b0eb8af43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173312    5803 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0311 04:28:59.173413    5803 cache.go:107] acquiring lock: {Name:mkff40b7bcd2fd4c59cfe6f4cc460bfa2f0d102d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173439    5803 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 04:28:59.173478    5803 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 04:28:59.173487    5803 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 04:28:59.173457    5803 start.go:360] acquireMachinesLock for no-preload-114000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:28:59.173527    5803 cache.go:107] acquiring lock: {Name:mk4cb1d26300f8018a1898e85b7a4813ab9e0c08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173535    5803 cache.go:107] acquiring lock: {Name:mkf39303b149c2088360fb3511a469232267b577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:28:59.173581    5803 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 04:28:59.173651    5803 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 04:28:59.173690    5803 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 04:28:59.173693    5803 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 04:28:59.173742    5803 start.go:364] duration metric: took 225µs to acquireMachinesLock for "no-preload-114000"
	I0311 04:28:59.173754    5803 start.go:93] Provisioning new machine with config: &{Name:no-preload-114000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:28:59.173784    5803 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:28:59.182084    5803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:28:59.186411    5803 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 04:28:59.189283    5803 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 04:28:59.189379    5803 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 04:28:59.189438    5803 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 04:28:59.192199    5803 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 04:28:59.192338    5803 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 04:28:59.192341    5803 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
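
	The seven "daemon lookup ... No such image" lines above are expected on a clean build host: minikube first asks the local Docker daemon for each control-plane image and only falls back to pulling it into the on-disk cache (the cache.go "opening:" lines below) on a miss. The same lookup can be reproduced by hand with any of the image names above, for example:

		docker image inspect registry.k8s.io/pause:3.9

	which exits non-zero with "No such image" when the daemon has never pulled that image.
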
	I0311 04:28:59.200687    5803 start.go:159] libmachine.API.Create for "no-preload-114000" (driver="qemu2")
	I0311 04:28:59.200707    5803 client.go:168] LocalClient.Create starting
	I0311 04:28:59.200774    5803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:28:59.200802    5803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:59.200813    5803 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:59.200863    5803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:28:59.200887    5803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:28:59.200896    5803 main.go:141] libmachine: Parsing certificate...
	I0311 04:28:59.201234    5803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:28:59.345479    5803 main.go:141] libmachine: Creating SSH key...
	I0311 04:28:59.398761    5803 main.go:141] libmachine: Creating Disk image...
	I0311 04:28:59.398777    5803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:28:59.398948    5803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
	I0311 04:28:59.411479    5803 main.go:141] libmachine: STDOUT: 
	I0311 04:28:59.411502    5803 main.go:141] libmachine: STDERR: 
	I0311 04:28:59.411570    5803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2 +20000M
	I0311 04:28:59.423804    5803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:28:59.423830    5803 main.go:141] libmachine: STDERR: 
	I0311 04:28:59.423849    5803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
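
	Disk creation is a two-step qemu-img sequence: convert the raw base image to qcow2, then grow it to the requested size (the "+20000M" argument is relative, not absolute). Run standalone against the same files, the steps plus a sanity check would look like this (paths shortened here; the qemu-img lines above show the full machine directory):

		qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
		qemu-img resize disk.qcow2 +20000M
		qemu-img info disk.qcow2    # virtual size should now report roughly 20 GB on top of the base image
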
	I0311 04:28:59.423856    5803 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:28:59.423893    5803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b5:de:c8:05:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
	I0311 04:28:59.425756    5803 main.go:141] libmachine: STDOUT: 
	I0311 04:28:59.425775    5803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:28:59.425805    5803 client.go:171] duration metric: took 225.095541ms to LocalClient.Create
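
	This "Connection refused" is the root cause of every failure in this test group: socket_vmnet_client could not reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never received the file descriptor behind "-netdev socket,id=net0,fd=3" in the command above, and no VM was started. Assuming the Homebrew-managed daemon that the minikube qemu2 driver documentation describes, the socket and service state can be checked on the build host with:

		ls -l /var/run/socket_vmnet
		sudo brew services info socket_vmnet

	and, if the daemon is down, brought back with sudo brew services restart socket_vmnet.
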
	I0311 04:29:01.137276    5803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0311 04:29:01.182710    5803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 04:29:01.223708    5803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 04:29:01.267680    5803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 04:29:01.271746    5803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 04:29:01.274628    5803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 04:29:01.274857    5803 cache.go:162] opening:  /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0311 04:29:01.339141    5803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0311 04:29:01.339181    5803 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.165963209s
	I0311 04:29:01.339208    5803 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0311 04:29:01.426160    5803 start.go:128] duration metric: took 2.252401458s to createHost
	I0311 04:29:01.426208    5803 start.go:83] releasing machines lock for "no-preload-114000", held for 2.252505667s
	W0311 04:29:01.426252    5803 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:01.443070    5803 out.go:177] * Deleting "no-preload-114000" in qemu2 ...
	W0311 04:29:01.471671    5803 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:01.471710    5803 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:03.970904    5803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0311 04:29:03.970990    5803 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.797685625s
	I0311 04:29:03.971015    5803 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0311 04:29:04.155100    5803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0311 04:29:04.155174    5803 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 4.981780709s
	I0311 04:29:04.155206    5803 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0311 04:29:04.215332    5803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0311 04:29:04.215375    5803 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.041980291s
	I0311 04:29:04.215441    5803 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0311 04:29:05.274017    5803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0311 04:29:05.274101    5803 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.100977959s
	I0311 04:29:05.274133    5803 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0311 04:29:05.382293    5803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0311 04:29:05.382338    5803 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 6.209245s
	I0311 04:29:05.382363    5803 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0311 04:29:06.471819    5803 start.go:360] acquireMachinesLock for no-preload-114000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:06.472203    5803 start.go:364] duration metric: took 309.625µs to acquireMachinesLock for "no-preload-114000"
	I0311 04:29:06.472331    5803 start.go:93] Provisioning new machine with config: &{Name:no-preload-114000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:29:06.472713    5803 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:29:06.479368    5803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:29:06.528233    5803 start.go:159] libmachine.API.Create for "no-preload-114000" (driver="qemu2")
	I0311 04:29:06.528300    5803 client.go:168] LocalClient.Create starting
	I0311 04:29:06.528419    5803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:29:06.528483    5803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:06.528499    5803 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:06.528564    5803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:29:06.528617    5803 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:06.528630    5803 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:06.529145    5803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:29:06.679182    5803 main.go:141] libmachine: Creating SSH key...
	I0311 04:29:06.732910    5803 main.go:141] libmachine: Creating Disk image...
	I0311 04:29:06.732916    5803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:29:06.733086    5803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
	I0311 04:29:06.745543    5803 main.go:141] libmachine: STDOUT: 
	I0311 04:29:06.745571    5803 main.go:141] libmachine: STDERR: 
	I0311 04:29:06.745630    5803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2 +20000M
	I0311 04:29:06.756494    5803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:29:06.756518    5803 main.go:141] libmachine: STDERR: 
	I0311 04:29:06.756532    5803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
	I0311 04:29:06.756535    5803 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:29:06.756572    5803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:47:aa:d6:97:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
	I0311 04:29:06.758407    5803 main.go:141] libmachine: STDOUT: 
	I0311 04:29:06.758423    5803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:06.758442    5803 client.go:171] duration metric: took 230.138375ms to LocalClient.Create
	I0311 04:29:08.541790    5803 cache.go:157] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0311 04:29:08.541885    5803 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 9.368761625s
	I0311 04:29:08.541956    5803 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0311 04:29:08.542002    5803 cache.go:87] Successfully saved all images to host disk.
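
	Note that the image cache completed successfully even though host creation failed: caching runs concurrently with the VM start and does not depend on it. The cache mirrors registry paths under the architecture-specific directory, so the saved tarballs can be listed directly:

		ls /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/

	Each entry (etcd_3.5.10-0, pause_3.9, kube-apiserver_v1.29.0-rc.2, ...) corresponds to one of the "save to tar file ... succeeded" lines above.
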
	I0311 04:29:08.759418    5803 start.go:128] duration metric: took 2.286682667s to createHost
	I0311 04:29:08.759475    5803 start.go:83] releasing machines lock for "no-preload-114000", held for 2.287298625s
	W0311 04:29:08.759758    5803 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:08.769270    5803 out.go:177] 
	W0311 04:29:08.773272    5803 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:08.773307    5803 out.go:239] * 
	* 
	W0311 04:29:08.776051    5803 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:08.786163    5803 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (65.627417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
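helpers_test.go: exit status 7 from "minikube status" is, per the command's own help text, a bitmask rather than a generic failure: 1 is set when the host is not running, 2 when kubelet is not running, and 4 when the apiserver is not running, so 7 means all three are down, consistent with the "Stopped" host printed above. The harness probe can be repeated manually:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000; echo $?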
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.82s)

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-114000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-114000 create -f testdata/busybox.yaml: exit status 1 (28.970791ms)

** stderr ** 
	error: context "no-preload-114000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-114000 create -f testdata/busybox.yaml failed: exit status 1
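
The missing context follows directly from the failed FirstStart: minikube only writes a profile's context into the kubeconfig once the cluster actually comes up, so every kubectl-driven step in this serial group fails with the same error. That can be confirmed on the build host with:

	kubectl config get-contexts
	kubectl config current-context

neither of which will list no-preload-114000 after the start failure.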
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (32.170625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (32.442042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-114000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-114000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-114000 describe deploy/metrics-server -n kube-system: exit status 1 (26.738375ms)

** stderr ** 
	error: context "no-preload-114000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-114000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
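
The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment image to carry the overridden registry, i.e. fake.domain/registry.k8s.io/echoserver:1.4, assembled from the --images and --registries flags passed to "addons enable" above. Against a live cluster the check would reduce to something like:

	kubectl --context no-preload-114000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

Here the test never gets that far, because the describe step already fails on the nonexistent context.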
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (32.015583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
E0311 04:29:15.863171    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.189143167s)

-- stdout --
	* [no-preload-114000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-114000" primary control-plane node in "no-preload-114000" cluster
	* Restarting existing qemu2 VM for "no-preload-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:29:13.031655    5881 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:13.031771    5881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:13.031774    5881 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:13.031777    5881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:13.031915    5881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:13.032945    5881 out.go:298] Setting JSON to false
	I0311 04:29:13.049266    5881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3525,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:29:13.049334    5881 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:29:13.054425    5881 out.go:177] * [no-preload-114000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:29:13.061425    5881 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:29:13.064260    5881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:29:13.061491    5881 notify.go:220] Checking for updates...
	I0311 04:29:13.070358    5881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:29:13.071905    5881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:29:13.075402    5881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:29:13.078339    5881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:29:13.081671    5881 config.go:182] Loaded profile config "no-preload-114000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 04:29:13.081935    5881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:29:13.086309    5881 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:29:13.094218    5881 start.go:297] selected driver: qemu2
	I0311 04:29:13.094223    5881 start.go:901] validating driver "qemu2" against &{Name:no-preload-114000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:13.094273    5881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:29:13.096558    5881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:29:13.096612    5881 cni.go:84] Creating CNI manager for ""
	I0311 04:29:13.096620    5881 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:29:13.096647    5881 start.go:340] cluster config:
	{Name:no-preload-114000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:13.101092    5881 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.108452    5881 out.go:177] * Starting "no-preload-114000" primary control-plane node in "no-preload-114000" cluster
	I0311 04:29:13.112338    5881 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 04:29:13.112405    5881 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/no-preload-114000/config.json ...
	I0311 04:29:13.112443    5881 cache.go:107] acquiring lock: {Name:mk2f4032ff1030d1bcd8a6e7b64d0f5de14c576d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112446    5881 cache.go:107] acquiring lock: {Name:mk9fc2a450dcdb6f014aa1cfb439c5555f2669ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112464    5881 cache.go:107] acquiring lock: {Name:mk4cb1d26300f8018a1898e85b7a4813ab9e0c08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112502    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0311 04:29:13.112515    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0311 04:29:13.112521    5881 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.667µs
	I0311 04:29:13.112521    5881 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 82.416µs
	I0311 04:29:13.112528    5881 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0311 04:29:13.112532    5881 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0311 04:29:13.112533    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0311 04:29:13.112538    5881 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 89.666µs
	I0311 04:29:13.112560    5881 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0311 04:29:13.112538    5881 cache.go:107] acquiring lock: {Name:mkf39303b149c2088360fb3511a469232267b577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112540    5881 cache.go:107] acquiring lock: {Name:mk98e530b5450d61c7157f208832e85464162a73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112548    5881 cache.go:107] acquiring lock: {Name:mkff40b7bcd2fd4c59cfe6f4cc460bfa2f0d102d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112648    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0311 04:29:13.112556    5881 cache.go:107] acquiring lock: {Name:mk78843ebb02b57c1501ad3277ed1e0b0eb8af43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112652    5881 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 113µs
	I0311 04:29:13.112657    5881 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0311 04:29:13.112598    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0311 04:29:13.112665    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0311 04:29:13.112670    5881 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 132.167µs
	I0311 04:29:13.112674    5881 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0311 04:29:13.112673    5881 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 124.458µs
	I0311 04:29:13.112677    5881 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0311 04:29:13.112604    5881 cache.go:107] acquiring lock: {Name:mka2856f7ed8f9639ed63bbc57edb9908f68e759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:13.112694    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0311 04:29:13.112699    5881 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 143.25µs
	I0311 04:29:13.112707    5881 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0311 04:29:13.112714    5881 cache.go:115] /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0311 04:29:13.112717    5881 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 140.167µs
	I0311 04:29:13.112725    5881 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0311 04:29:13.112729    5881 cache.go:87] Successfully saved all images to host disk.
	I0311 04:29:13.112842    5881 start.go:360] acquireMachinesLock for no-preload-114000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:13.112870    5881 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "no-preload-114000"
	I0311 04:29:13.112879    5881 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:13.112886    5881 fix.go:54] fixHost starting: 
	I0311 04:29:13.113011    5881 fix.go:112] recreateIfNeeded on no-preload-114000: state=Stopped err=<nil>
	W0311 04:29:13.113019    5881 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:13.121352    5881 out.go:177] * Restarting existing qemu2 VM for "no-preload-114000" ...
	I0311 04:29:13.125385    5881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:47:aa:d6:97:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
	I0311 04:29:13.127456    5881 main.go:141] libmachine: STDOUT: 
	I0311 04:29:13.127484    5881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:13.127512    5881 fix.go:56] duration metric: took 14.627917ms for fixHost
	I0311 04:29:13.127517    5881 start.go:83] releasing machines lock for "no-preload-114000", held for 14.643458ms
	W0311 04:29:13.127522    5881 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:13.127557    5881 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:13.127562    5881 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:18.129714    5881 start.go:360] acquireMachinesLock for no-preload-114000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:18.130033    5881 start.go:364] duration metric: took 238.458µs to acquireMachinesLock for "no-preload-114000"
	I0311 04:29:18.130163    5881 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:18.130187    5881 fix.go:54] fixHost starting: 
	I0311 04:29:18.130877    5881 fix.go:112] recreateIfNeeded on no-preload-114000: state=Stopped err=<nil>
	W0311 04:29:18.130904    5881 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:18.136403    5881 out.go:177] * Restarting existing qemu2 VM for "no-preload-114000" ...
	I0311 04:29:18.143559    5881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:47:aa:d6:97:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/no-preload-114000/disk.qcow2
	I0311 04:29:18.153475    5881 main.go:141] libmachine: STDOUT: 
	I0311 04:29:18.153549    5881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:18.153632    5881 fix.go:56] duration metric: took 23.452166ms for fixHost
	I0311 04:29:18.153650    5881 start.go:83] releasing machines lock for "no-preload-114000", held for 23.59525ms
	W0311 04:29:18.153792    5881 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-114000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-114000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:18.161305    5881 out.go:177] 
	W0311 04:29:18.165415    5881 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:18.165432    5881 out.go:239] * 
	* 
	W0311 04:29:18.167688    5881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:18.181350    5881 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (72.865625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
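Every failure in this group reduces to the same precondition: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client is refused before qemu-system-aarch64 ever launches. The following standalone Go probe is a minimal sketch (not part of the test suite; the socket path is copied from the SocketVMnetPath field logged above) that reproduces the failing check:

	// probe.go: dial the socket_vmnet control socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// socket_vmnet_client connects here and passes the resulting fd to qemu
		// as "-netdev socket,id=net0,fd=3"; when the daemon is down the dial
		// fails with "connection refused", exactly as seen in the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}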

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-114000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (34.640458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-114000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-114000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-114000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.613375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-114000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-114000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (32.443583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-114000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
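The block above is a go-cmp style "(-want +got)" diff: lines prefixed with "-" are images the test expected but did not find, and every expected image is missing because the VM never booted, so "image list" returned nothing. A minimal sketch of that comparison (assuming the harness compares string slices with github.com/google/go-cmp; the variable names and the abbreviated lists are illustrative):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.9"} // abbreviated expected list
		got := []string{}                             // empty: the cluster never started
		// cmp.Diff prints "-" for entries present in want but absent from got,
		// matching the diff notation in the log above.
		fmt.Println(cmp.Diff(want, got))
	}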
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (32.227125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-114000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-114000 --alsologtostderr -v=1: exit status 83 (43.4225ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-114000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-114000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:29:18.465668    5907 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:18.465809    5907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:18.465812    5907 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:18.465815    5907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:18.465944    5907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:18.466166    5907 out.go:298] Setting JSON to false
	I0311 04:29:18.466174    5907 mustload.go:65] Loading cluster: no-preload-114000
	I0311 04:29:18.466368    5907 config.go:182] Loaded profile config "no-preload-114000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 04:29:18.470818    5907 out.go:177] * The control-plane node no-preload-114000 host is not running: state=Stopped
	I0311 04:29:18.473837    5907 out.go:177]   To start a cluster, run: "minikube start -p no-preload-114000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-114000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (31.568542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (32.129375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-636000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-636000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.9400315s)

                                                
                                                
-- stdout --
	* [embed-certs-636000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-636000" primary control-plane node in "embed-certs-636000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-636000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:29:18.947267    5930 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:18.947402    5930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:18.947405    5930 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:18.947407    5930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:18.947542    5930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:18.948620    5930 out.go:298] Setting JSON to false
	I0311 04:29:18.964821    5930 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3530,"bootTime":1710153028,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:29:18.964888    5930 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:29:18.969433    5930 out.go:177] * [embed-certs-636000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:29:18.976561    5930 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:29:18.976607    5930 notify.go:220] Checking for updates...
	I0311 04:29:18.980472    5930 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:29:18.984554    5930 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:29:18.987600    5930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:29:18.990527    5930 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:29:18.993522    5930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:29:18.996920    5930 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:18.996983    5930 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:18.997026    5930 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:29:19.001428    5930 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:29:19.008532    5930 start.go:297] selected driver: qemu2
	I0311 04:29:19.008540    5930 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:29:19.008547    5930 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:29:19.010842    5930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:29:19.014553    5930 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:29:19.017635    5930 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:29:19.017677    5930 cni.go:84] Creating CNI manager for ""
	I0311 04:29:19.017685    5930 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:29:19.017690    5930 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:29:19.017726    5930 start.go:340] cluster config:
	{Name:embed-certs-636000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:19.022317    5930 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:19.029534    5930 out.go:177] * Starting "embed-certs-636000" primary control-plane node in "embed-certs-636000" cluster
	I0311 04:29:19.033553    5930 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:29:19.033568    5930 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:29:19.033581    5930 cache.go:56] Caching tarball of preloaded images
	I0311 04:29:19.033640    5930 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:29:19.033647    5930 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:29:19.033722    5930 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/embed-certs-636000/config.json ...
	I0311 04:29:19.033733    5930 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/embed-certs-636000/config.json: {Name:mk2fee37df304af47ade319efd655897ad01cae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:29:19.033960    5930 start.go:360] acquireMachinesLock for embed-certs-636000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:19.033993    5930 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "embed-certs-636000"
	I0311 04:29:19.034005    5930 start.go:93] Provisioning new machine with config: &{Name:embed-certs-636000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:29:19.034044    5930 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:29:19.042516    5930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:29:19.060288    5930 start.go:159] libmachine.API.Create for "embed-certs-636000" (driver="qemu2")
	I0311 04:29:19.060321    5930 client.go:168] LocalClient.Create starting
	I0311 04:29:19.060391    5930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:29:19.060420    5930 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:19.060430    5930 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:19.060480    5930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:29:19.060503    5930 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:19.060509    5930 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:19.060876    5930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:29:19.200543    5930 main.go:141] libmachine: Creating SSH key...
	I0311 04:29:19.427568    5930 main.go:141] libmachine: Creating Disk image...
	I0311 04:29:19.427578    5930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:29:19.427785    5930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:19.440384    5930 main.go:141] libmachine: STDOUT: 
	I0311 04:29:19.440407    5930 main.go:141] libmachine: STDERR: 
	I0311 04:29:19.440464    5930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2 +20000M
	I0311 04:29:19.451369    5930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:29:19.451396    5930 main.go:141] libmachine: STDERR: 
	I0311 04:29:19.451414    5930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:19.451420    5930 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:29:19.451446    5930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:2b:70:a4:e9:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:19.453237    5930 main.go:141] libmachine: STDOUT: 
	I0311 04:29:19.453256    5930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:19.453273    5930 client.go:171] duration metric: took 392.953916ms to LocalClient.Create
	I0311 04:29:21.455476    5930 start.go:128] duration metric: took 2.421449542s to createHost
	I0311 04:29:21.455573    5930 start.go:83] releasing machines lock for "embed-certs-636000", held for 2.421623208s
	W0311 04:29:21.455638    5930 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:21.467728    5930 out.go:177] * Deleting "embed-certs-636000" in qemu2 ...
	W0311 04:29:21.497320    5930 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:21.497353    5930 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:26.499553    5930 start.go:360] acquireMachinesLock for embed-certs-636000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:26.500024    5930 start.go:364] duration metric: took 364.458µs to acquireMachinesLock for "embed-certs-636000"
	I0311 04:29:26.500168    5930 start.go:93] Provisioning new machine with config: &{Name:embed-certs-636000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:29:26.500488    5930 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:29:26.512120    5930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:29:26.563939    5930 start.go:159] libmachine.API.Create for "embed-certs-636000" (driver="qemu2")
	I0311 04:29:26.564002    5930 client.go:168] LocalClient.Create starting
	I0311 04:29:26.564125    5930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:29:26.564196    5930 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:26.564215    5930 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:26.564286    5930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:29:26.564344    5930 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:26.564362    5930 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:26.564968    5930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:29:26.717317    5930 main.go:141] libmachine: Creating SSH key...
	I0311 04:29:26.786042    5930 main.go:141] libmachine: Creating Disk image...
	I0311 04:29:26.786048    5930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:29:26.786235    5930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:26.798720    5930 main.go:141] libmachine: STDOUT: 
	I0311 04:29:26.798742    5930 main.go:141] libmachine: STDERR: 
	I0311 04:29:26.798787    5930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2 +20000M
	I0311 04:29:26.809657    5930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:29:26.809675    5930 main.go:141] libmachine: STDERR: 
	I0311 04:29:26.809687    5930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:26.809693    5930 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:29:26.809730    5930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8e:91:37:88:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:26.811493    5930 main.go:141] libmachine: STDOUT: 
	I0311 04:29:26.811511    5930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:26.811525    5930 client.go:171] duration metric: took 247.522042ms to LocalClient.Create
	I0311 04:29:28.813657    5930 start.go:128] duration metric: took 2.31318875s to createHost
	I0311 04:29:28.813723    5930 start.go:83] releasing machines lock for "embed-certs-636000", held for 2.313725459s
	W0311 04:29:28.814031    5930 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:28.822787    5930 out.go:177] 
	W0311 04:29:28.829900    5930 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:28.829926    5930 out.go:239] * 
	W0311 04:29:28.832703    5930 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:28.841677    5930 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-636000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (70.737416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.01s)
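For reference, the "executing:" lines above show how the qemu2 driver launches the VM: qemu is wrapped by socket_vmnet_client, which must first connect to /var/run/socket_vmnet and only then hands the connected fd to qemu as "-netdev socket,id=net0,fd=3". A simplified Go sketch of that invocation (illustrative only; the paths and flags are copied from this report's log lines, with most per-machine arguments omitted):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Because qemu is wrapped in socket_vmnet_client, a dead vmnet daemon
		// fails the whole command with "Connection refused" before qemu starts.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-M", "virt,highmem=off", // flags copied from the log lines above
			"-cpu", "host",
			"-accel", "hvf",
			"-m", "2200", "-smp", "2",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3",
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("driver start failed: %v", err) // exit status 1 in this report
		}
	}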

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-636000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-636000 create -f testdata/busybox.yaml: exit status 1 (29.254209ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-636000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-636000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (32.435541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (32.144875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-636000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-636000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-636000 describe deploy/metrics-server -n kube-system: exit status 1 (27.064125ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-636000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-636000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (32.597541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-636000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-636000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.191105791s)

                                                
                                                
-- stdout --
	* [embed-certs-636000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-636000" primary control-plane node in "embed-certs-636000" cluster
	* Restarting existing qemu2 VM for "embed-certs-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 04:29:31.350371    5972 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:31.350512    5972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:31.350516    5972 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:31.350518    5972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:31.350638    5972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:31.351611    5972 out.go:298] Setting JSON to false
	I0311 04:29:31.367692    5972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3543,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:29:31.367770    5972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:29:31.372648    5972 out.go:177] * [embed-certs-636000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:29:31.379617    5972 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:29:31.379674    5972 notify.go:220] Checking for updates...
	I0311 04:29:31.387646    5972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:29:31.390684    5972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:29:31.393650    5972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:29:31.396604    5972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:29:31.399605    5972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:29:31.403000    5972 config.go:182] Loaded profile config "embed-certs-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:31.403250    5972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:29:31.407618    5972 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:29:31.414627    5972 start.go:297] selected driver: qemu2
	I0311 04:29:31.414639    5972 start.go:901] validating driver "qemu2" against &{Name:embed-certs-636000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:31.414699    5972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:29:31.416983    5972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:29:31.417030    5972 cni.go:84] Creating CNI manager for ""
	I0311 04:29:31.417037    5972 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:29:31.417086    5972 start.go:340] cluster config:
	{Name:embed-certs-636000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:31.421435    5972 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:31.429644    5972 out.go:177] * Starting "embed-certs-636000" primary control-plane node in "embed-certs-636000" cluster
	I0311 04:29:31.432548    5972 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:29:31.432563    5972 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:29:31.432575    5972 cache.go:56] Caching tarball of preloaded images
	I0311 04:29:31.432633    5972 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:29:31.432639    5972 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:29:31.432711    5972 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/embed-certs-636000/config.json ...
	I0311 04:29:31.433203    5972 start.go:360] acquireMachinesLock for embed-certs-636000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:31.433229    5972 start.go:364] duration metric: took 20.042µs to acquireMachinesLock for "embed-certs-636000"
	I0311 04:29:31.433236    5972 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:31.433241    5972 fix.go:54] fixHost starting: 
	I0311 04:29:31.433355    5972 fix.go:112] recreateIfNeeded on embed-certs-636000: state=Stopped err=<nil>
	W0311 04:29:31.433364    5972 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:31.437652    5972 out.go:177] * Restarting existing qemu2 VM for "embed-certs-636000" ...
	I0311 04:29:31.444630    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8e:91:37:88:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:31.446716    5972 main.go:141] libmachine: STDOUT: 
	I0311 04:29:31.446736    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:31.446764    5972 fix.go:56] duration metric: took 13.523125ms for fixHost
	I0311 04:29:31.446769    5972 start.go:83] releasing machines lock for "embed-certs-636000", held for 13.537084ms
	W0311 04:29:31.446776    5972 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:31.446813    5972 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:31.446818    5972 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:36.448926    5972 start.go:360] acquireMachinesLock for embed-certs-636000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:36.449277    5972 start.go:364] duration metric: took 218.458µs to acquireMachinesLock for "embed-certs-636000"
	I0311 04:29:36.449391    5972 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:36.449416    5972 fix.go:54] fixHost starting: 
	I0311 04:29:36.450135    5972 fix.go:112] recreateIfNeeded on embed-certs-636000: state=Stopped err=<nil>
	W0311 04:29:36.450167    5972 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:36.455523    5972 out.go:177] * Restarting existing qemu2 VM for "embed-certs-636000" ...
	I0311 04:29:36.463690    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8e:91:37:88:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/embed-certs-636000/disk.qcow2
	I0311 04:29:36.473261    5972 main.go:141] libmachine: STDOUT: 
	I0311 04:29:36.473325    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:36.473405    5972 fix.go:56] duration metric: took 23.992583ms for fixHost
	I0311 04:29:36.473419    5972 start.go:83] releasing machines lock for "embed-certs-636000", held for 24.117042ms
	W0311 04:29:36.473574    5972 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:36.482403    5972 out.go:177] 
	W0311 04:29:36.486644    5972 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:36.486690    5972 out.go:239] * 
	* 
	W0311 04:29:36.489243    5972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:36.496559    5972 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-636000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (66.612958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
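
Every failure in this group reduces to the same refused dial on /var/run/socket_vmnet: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU is never launched. A minimal Go probe, hypothetical and not part of the minikube test suite, that checks the same precondition the qemu2 driver trips over:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the failing log lines above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A refused dial here matches the driver error: the socket_vmnet
		// daemon is not serving the socket, so every VM start will fail.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet reachable at %s\n", sock)
}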

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-636000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (33.389792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
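
The failure above is downstream of SecondStart: the cluster never came back, so the kubeconfig holds no "embed-certs-636000" context for the client config to load. A short sketch of that lookup using k8s.io/client-go (an assumed dependency here; the test's actual client-config plumbing may differ):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path copied from the run's environment above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18350-986/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["embed-certs-636000"]; !ok {
		// The state this test hits: the failed start never (re)wrote the
		// context, so any client built from it fails immediately.
		fmt.Println(`context "embed-certs-636000" does not exist`)
	}
}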

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-636000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-636000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-636000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.74125ms)

** stderr ** 
	error: context "embed-certs-636000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-636000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (30.30925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-636000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (31.0255ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
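
The -want +got diff above lists every expected v1.28.4 image as missing because "image list" returned nothing from a host that never started. A rough Go sketch of that comparison; the real test diffs with a library, and missingImages is an illustrative helper:

package main

import (
	"fmt"
	"sort"
)

// missingImages reports the entries of want that are absent from got.
func missingImages(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var missing []string
	for _, img := range want {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.28.4",
		// ...remaining v1.28.4 images from the diff above
	}
	var got []string // `image list` returned nothing: the host never ran
	fmt.Println(missingImages(want, got))
}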

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-636000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-636000 --alsologtostderr -v=1: exit status 83 (42.475542ms)

-- stdout --
	* The control-plane node embed-certs-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-636000"

-- /stdout --
** stderr ** 
	I0311 04:29:36.771707    6001 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:36.771860    6001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:36.771863    6001 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:36.771866    6001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:36.771999    6001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:36.772208    6001 out.go:298] Setting JSON to false
	I0311 04:29:36.772215    6001 mustload.go:65] Loading cluster: embed-certs-636000
	I0311 04:29:36.772383    6001 config.go:182] Loaded profile config "embed-certs-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:36.776421    6001 out.go:177] * The control-plane node embed-certs-636000 host is not running: state=Stopped
	I0311 04:29:36.780462    6001 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-636000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-636000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (31.125666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (31.271334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
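
Each post-mortem block runs status --format={{.Host}} and treats exit status 7 as tolerable, pairing it with the "Stopped" stdout. A small Go sketch of that tolerant exit-code handling; the command line is copied from the log, while the surrounding program is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}",
		"-p", "embed-certs-636000", "-n", "embed-certs-636000")
	out, err := cmd.Output() // out still holds stdout ("Stopped") on failure
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 7 from `minikube status` accompanies a stopped host
		// in this report, so the helpers log it as "may be ok".
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host state: %s", out)
}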

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.832111542s)

-- stdout --
	* [default-k8s-diff-port-735000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-735000" primary control-plane node in "default-k8s-diff-port-735000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-735000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:29:37.482285    6036 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:37.482413    6036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:37.482417    6036 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:37.482419    6036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:37.482539    6036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:37.483659    6036 out.go:298] Setting JSON to false
	I0311 04:29:37.499868    6036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3549,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:29:37.499926    6036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:29:37.504677    6036 out.go:177] * [default-k8s-diff-port-735000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:29:37.511565    6036 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:29:37.515628    6036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:29:37.511638    6036 notify.go:220] Checking for updates...
	I0311 04:29:37.522548    6036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:29:37.525576    6036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:29:37.528506    6036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:29:37.531553    6036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:29:37.534957    6036 config.go:182] Loaded profile config "cert-expiration-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:37.535022    6036 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:37.535070    6036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:29:37.538512    6036 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:29:37.545560    6036 start.go:297] selected driver: qemu2
	I0311 04:29:37.545568    6036 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:29:37.545575    6036 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:29:37.547865    6036 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 04:29:37.549453    6036 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:29:37.552598    6036 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:29:37.552636    6036 cni.go:84] Creating CNI manager for ""
	I0311 04:29:37.552643    6036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:29:37.552647    6036 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:29:37.552671    6036 start.go:340] cluster config:
	{Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:37.557148    6036 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:37.564548    6036 out.go:177] * Starting "default-k8s-diff-port-735000" primary control-plane node in "default-k8s-diff-port-735000" cluster
	I0311 04:29:37.568563    6036 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:29:37.568577    6036 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:29:37.568599    6036 cache.go:56] Caching tarball of preloaded images
	I0311 04:29:37.568661    6036 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:29:37.568668    6036 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:29:37.568732    6036 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/default-k8s-diff-port-735000/config.json ...
	I0311 04:29:37.568744    6036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/default-k8s-diff-port-735000/config.json: {Name:mk5afe6d9d6b024d5743ee0241fd42f525b48cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:29:37.568980    6036 start.go:360] acquireMachinesLock for default-k8s-diff-port-735000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:37.569015    6036 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "default-k8s-diff-port-735000"
	I0311 04:29:37.569027    6036 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:29:37.569063    6036 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:29:37.577530    6036 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:29:37.595966    6036 start.go:159] libmachine.API.Create for "default-k8s-diff-port-735000" (driver="qemu2")
	I0311 04:29:37.596002    6036 client.go:168] LocalClient.Create starting
	I0311 04:29:37.596115    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:29:37.596154    6036 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:37.596165    6036 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:37.596209    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:29:37.596232    6036 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:37.596240    6036 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:37.596626    6036 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:29:37.755234    6036 main.go:141] libmachine: Creating SSH key...
	I0311 04:29:37.802457    6036 main.go:141] libmachine: Creating Disk image...
	I0311 04:29:37.802463    6036 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:29:37.802629    6036 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:37.814688    6036 main.go:141] libmachine: STDOUT: 
	I0311 04:29:37.814713    6036 main.go:141] libmachine: STDERR: 
	I0311 04:29:37.814762    6036 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2 +20000M
	I0311 04:29:37.825240    6036 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:29:37.825256    6036 main.go:141] libmachine: STDERR: 
	I0311 04:29:37.825269    6036 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:37.825274    6036 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:29:37.825308    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:81:00:dd:85:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:37.826950    6036 main.go:141] libmachine: STDOUT: 
	I0311 04:29:37.826967    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:37.826989    6036 client.go:171] duration metric: took 230.983083ms to LocalClient.Create
	I0311 04:29:39.829320    6036 start.go:128] duration metric: took 2.260271875s to createHost
	I0311 04:29:39.829432    6036 start.go:83] releasing machines lock for "default-k8s-diff-port-735000", held for 2.26042225s
	W0311 04:29:39.829535    6036 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:39.842536    6036 out.go:177] * Deleting "default-k8s-diff-port-735000" in qemu2 ...
	W0311 04:29:39.868144    6036 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:39.868184    6036 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:44.870348    6036 start.go:360] acquireMachinesLock for default-k8s-diff-port-735000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:44.870772    6036 start.go:364] duration metric: took 308.083µs to acquireMachinesLock for "default-k8s-diff-port-735000"
	I0311 04:29:44.870897    6036 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:29:44.871217    6036 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:29:44.880825    6036 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:29:44.927769    6036 start.go:159] libmachine.API.Create for "default-k8s-diff-port-735000" (driver="qemu2")
	I0311 04:29:44.927819    6036 client.go:168] LocalClient.Create starting
	I0311 04:29:44.927931    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:29:44.927991    6036 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:44.928012    6036 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:44.928080    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:29:44.928123    6036 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:44.928135    6036 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:44.929291    6036 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:29:45.086642    6036 main.go:141] libmachine: Creating SSH key...
	I0311 04:29:45.212151    6036 main.go:141] libmachine: Creating Disk image...
	I0311 04:29:45.212161    6036 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:29:45.212341    6036 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:45.224711    6036 main.go:141] libmachine: STDOUT: 
	I0311 04:29:45.224730    6036 main.go:141] libmachine: STDERR: 
	I0311 04:29:45.224783    6036 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2 +20000M
	I0311 04:29:45.235520    6036 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:29:45.235535    6036 main.go:141] libmachine: STDERR: 
	I0311 04:29:45.235546    6036 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:45.235549    6036 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:29:45.235585    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b3:5d:7c:99:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:45.237259    6036 main.go:141] libmachine: STDOUT: 
	I0311 04:29:45.237274    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:45.237292    6036 client.go:171] duration metric: took 309.466458ms to LocalClient.Create
	I0311 04:29:47.239520    6036 start.go:128] duration metric: took 2.368303708s to createHost
	I0311 04:29:47.239602    6036 start.go:83] releasing machines lock for "default-k8s-diff-port-735000", held for 2.368857s
	W0311 04:29:47.239955    6036 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-735000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-735000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:47.250437    6036 out.go:177] 
	W0311 04:29:47.256561    6036 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:47.256619    6036 out.go:239] * 
	* 
	W0311 04:29:47.259203    6036 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:47.269454    6036 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (67.266583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.90s)
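
The FirstStart flow above fails the create, logs "Will try again in 5 seconds ...", retries once, and only then exits with GUEST_PROVISION. A condensed Go sketch of that single-retry shape; startHost is an illustrative stand-in for the qemu2 driver start, not minikube's actual function:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that keeps failing in this run.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}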

TestStartStop/group/newest-cni/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-306000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-306000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.886548417s)

-- stdout --
	* [newest-cni-306000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-306000" primary control-plane node in "newest-cni-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:29:40.708342    6053 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:40.708468    6053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:40.708471    6053 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:40.708473    6053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:40.708582    6053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:40.709644    6053 out.go:298] Setting JSON to false
	I0311 04:29:40.725861    6053 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3552,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:29:40.725987    6053 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:29:40.731298    6053 out.go:177] * [newest-cni-306000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:29:40.738400    6053 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:29:40.738479    6053 notify.go:220] Checking for updates...
	I0311 04:29:40.747319    6053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:29:40.755250    6053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:29:40.763374    6053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:29:40.767303    6053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:29:40.770362    6053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:29:40.773780    6053 config.go:182] Loaded profile config "default-k8s-diff-port-735000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:40.773854    6053 config.go:182] Loaded profile config "multinode-976000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:40.773907    6053 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:29:40.778301    6053 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 04:29:40.785385    6053 start.go:297] selected driver: qemu2
	I0311 04:29:40.785393    6053 start.go:901] validating driver "qemu2" against <nil>
	I0311 04:29:40.785401    6053 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:29:40.787880    6053 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0311 04:29:40.787908    6053 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0311 04:29:40.792300    6053 out.go:177] * Automatically selected the socket_vmnet network
	I0311 04:29:40.799472    6053 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 04:29:40.799522    6053 cni.go:84] Creating CNI manager for ""
	I0311 04:29:40.799531    6053 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:29:40.799537    6053 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 04:29:40.799583    6053 start.go:340] cluster config:
	{Name:newest-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:40.804872    6053 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:40.812362    6053 out.go:177] * Starting "newest-cni-306000" primary control-plane node in "newest-cni-306000" cluster
	I0311 04:29:40.816150    6053 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 04:29:40.816168    6053 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 04:29:40.816177    6053 cache.go:56] Caching tarball of preloaded images
	I0311 04:29:40.816251    6053 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:29:40.816258    6053 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0311 04:29:40.816333    6053 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/newest-cni-306000/config.json ...
	I0311 04:29:40.816345    6053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/newest-cni-306000/config.json: {Name:mk347053f5c0adb1b292957d87fa4db04214ac88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 04:29:40.816588    6053 start.go:360] acquireMachinesLock for newest-cni-306000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:40.816625    6053 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "newest-cni-306000"
	I0311 04:29:40.816638    6053 start.go:93] Provisioning new machine with config: &{Name:newest-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:29:40.816674    6053 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:29:40.820406    6053 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:29:40.839043    6053 start.go:159] libmachine.API.Create for "newest-cni-306000" (driver="qemu2")
	I0311 04:29:40.839075    6053 client.go:168] LocalClient.Create starting
	I0311 04:29:40.839143    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:29:40.839174    6053 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:40.839183    6053 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:40.839233    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:29:40.839258    6053 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:40.839267    6053 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:40.839622    6053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:29:40.980764    6053 main.go:141] libmachine: Creating SSH key...
	I0311 04:29:41.155794    6053 main.go:141] libmachine: Creating Disk image...
	I0311 04:29:41.155802    6053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:29:41.155976    6053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:41.168565    6053 main.go:141] libmachine: STDOUT: 
	I0311 04:29:41.168584    6053 main.go:141] libmachine: STDERR: 
	I0311 04:29:41.168647    6053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2 +20000M
	I0311 04:29:41.179318    6053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:29:41.179337    6053 main.go:141] libmachine: STDERR: 
	I0311 04:29:41.179354    6053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:41.179360    6053 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:29:41.179394    6053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:29:63:f4:5d:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:41.181096    6053 main.go:141] libmachine: STDOUT: 
	I0311 04:29:41.181113    6053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:41.181131    6053 client.go:171] duration metric: took 342.057667ms to LocalClient.Create
	I0311 04:29:43.183325    6053 start.go:128] duration metric: took 2.366679417s to createHost
	I0311 04:29:43.183441    6053 start.go:83] releasing machines lock for "newest-cni-306000", held for 2.366856791s
	W0311 04:29:43.183487    6053 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:43.194492    6053 out.go:177] * Deleting "newest-cni-306000" in qemu2 ...
	W0311 04:29:43.223317    6053 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:43.223350    6053 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:48.225442    6053 start.go:360] acquireMachinesLock for newest-cni-306000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:48.225746    6053 start.go:364] duration metric: took 226.75µs to acquireMachinesLock for "newest-cni-306000"
	I0311 04:29:48.225858    6053 start.go:93] Provisioning new machine with config: &{Name:newest-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 04:29:48.226103    6053 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 04:29:48.235751    6053 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 04:29:48.279363    6053 start.go:159] libmachine.API.Create for "newest-cni-306000" (driver="qemu2")
	I0311 04:29:48.279414    6053 client.go:168] LocalClient.Create starting
	I0311 04:29:48.279516    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/ca.pem
	I0311 04:29:48.279564    6053 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:48.279587    6053 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:48.279665    6053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18350-986/.minikube/certs/cert.pem
	I0311 04:29:48.279705    6053 main.go:141] libmachine: Decoding PEM data...
	I0311 04:29:48.279721    6053 main.go:141] libmachine: Parsing certificate...
	I0311 04:29:48.280377    6053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18350-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 04:29:48.429538    6053 main.go:141] libmachine: Creating SSH key...
	I0311 04:29:48.489558    6053 main.go:141] libmachine: Creating Disk image...
	I0311 04:29:48.489565    6053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 04:29:48.489735    6053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:48.502202    6053 main.go:141] libmachine: STDOUT: 
	I0311 04:29:48.502222    6053 main.go:141] libmachine: STDERR: 
	I0311 04:29:48.502274    6053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2 +20000M
	I0311 04:29:48.513095    6053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 04:29:48.513107    6053 main.go:141] libmachine: STDERR: 
	I0311 04:29:48.513117    6053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:48.513128    6053 main.go:141] libmachine: Starting QEMU VM...
	I0311 04:29:48.513161    6053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:5c:5c:ad:62:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:48.514890    6053 main.go:141] libmachine: STDOUT: 
	I0311 04:29:48.514904    6053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:48.514915    6053 client.go:171] duration metric: took 235.500834ms to LocalClient.Create
	I0311 04:29:50.517074    6053 start.go:128] duration metric: took 2.290981709s to createHost
	I0311 04:29:50.517188    6053 start.go:83] releasing machines lock for "newest-cni-306000", held for 2.291441s
	W0311 04:29:50.517560    6053 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:50.531139    6053 out.go:177] 
	W0311 04:29:50.535193    6053 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:50.535219    6053 out.go:239] * 
	* 
	W0311 04:29:50.538102    6053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:50.547128    6053 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-306000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000: exit status 7 (63.70575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-306000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.95s)
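
Every qemu2 start in this run dies at the same point: socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet, so QEMU is never handed its network file descriptor and the machine is torn down. A minimal diagnosis sketch for the affected host, assuming the install paths shown in the log above (a Homebrew install may place things elsewhere):

	# Is the socket present, and is the daemon behind it alive?
	ls -l /var/run/socket_vmnet          # the UNIX socket file should exist
	pgrep -fl socket_vmnet               # the daemon process should be listed
	sudo lsof -U | grep socket_vmnet     # confirm something is actually bound to the socket

If the daemon is not running, the "Connection refused" errors are expected: the client exits with status 1 before QEMU launches, which is why every post-mortem below reports the host as "Stopped".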

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-735000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-735000 create -f testdata/busybox.yaml: exit status 1 (29.0685ms)

** stderr ** 
	error: context "default-k8s-diff-port-735000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-735000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (31.516791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (31.02425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
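
This DeployApp failure, and the addon checks that follow, are downstream of the failed start: because the VM never booted, minikube never wrote a kubeconfig entry for the profile, so every kubectl call fails with context "default-k8s-diff-port-735000" does not exist. A quick way to confirm on the host (standard kubectl subcommands; nothing profile-specific is assumed):

	# A context only appears after a successful cluster start:
	kubectl config get-contexts          # list the contexts kubectl knows about
	kubectl config current-context       # the context kubectl would use by default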

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-735000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-735000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-735000 describe deploy/metrics-server -n kube-system: exit status 1 (26.786209ms)

** stderr ** 
	error: context "default-k8s-diff-port-735000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-735000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (31.690708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.976760917s)

-- stdout --
	* [default-k8s-diff-port-735000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-735000" primary control-plane node in "default-k8s-diff-port-735000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-735000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-735000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:29:49.657743    6101 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:49.657873    6101 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:49.657876    6101 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:49.657878    6101 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:49.657998    6101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:49.658972    6101 out.go:298] Setting JSON to false
	I0311 04:29:49.675014    6101 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3561,"bootTime":1710153028,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:29:49.675083    6101 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:29:49.679718    6101 out.go:177] * [default-k8s-diff-port-735000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:29:49.686688    6101 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:29:49.686727    6101 notify.go:220] Checking for updates...
	I0311 04:29:49.694628    6101 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:29:49.701669    6101 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:29:49.704688    6101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:29:49.707624    6101 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:29:49.711590    6101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:29:49.715042    6101 config.go:182] Loaded profile config "default-k8s-diff-port-735000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:49.715307    6101 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:29:49.719622    6101 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:29:49.726667    6101 start.go:297] selected driver: qemu2
	I0311 04:29:49.726675    6101 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:49.726742    6101 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:29:49.729215    6101 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 04:29:49.729263    6101 cni.go:84] Creating CNI manager for ""
	I0311 04:29:49.729271    6101 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:29:49.729301    6101 start.go:340] cluster config:
	{Name:default-k8s-diff-port-735000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:49.733935    6101 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:49.742668    6101 out.go:177] * Starting "default-k8s-diff-port-735000" primary control-plane node in "default-k8s-diff-port-735000" cluster
	I0311 04:29:49.746635    6101 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 04:29:49.746651    6101 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 04:29:49.746663    6101 cache.go:56] Caching tarball of preloaded images
	I0311 04:29:49.746747    6101 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:29:49.746753    6101 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 04:29:49.746834    6101 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/default-k8s-diff-port-735000/config.json ...
	I0311 04:29:49.747315    6101 start.go:360] acquireMachinesLock for default-k8s-diff-port-735000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:50.517321    6101 start.go:364] duration metric: took 769.994875ms to acquireMachinesLock for "default-k8s-diff-port-735000"
	I0311 04:29:50.517422    6101 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:50.517478    6101 fix.go:54] fixHost starting: 
	I0311 04:29:50.518195    6101 fix.go:112] recreateIfNeeded on default-k8s-diff-port-735000: state=Stopped err=<nil>
	W0311 04:29:50.518235    6101 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:50.531119    6101 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-735000" ...
	I0311 04:29:50.535366    6101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b3:5d:7c:99:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:50.545271    6101 main.go:141] libmachine: STDOUT: 
	I0311 04:29:50.545355    6101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:50.545480    6101 fix.go:56] duration metric: took 28.012667ms for fixHost
	I0311 04:29:50.545504    6101 start.go:83] releasing machines lock for "default-k8s-diff-port-735000", held for 28.1445ms
	W0311 04:29:50.545542    6101 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:50.545718    6101 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:50.545740    6101 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:55.547910    6101 start.go:360] acquireMachinesLock for default-k8s-diff-port-735000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:55.548329    6101 start.go:364] duration metric: took 308.708µs to acquireMachinesLock for "default-k8s-diff-port-735000"
	I0311 04:29:55.548413    6101 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:55.548471    6101 fix.go:54] fixHost starting: 
	I0311 04:29:55.549156    6101 fix.go:112] recreateIfNeeded on default-k8s-diff-port-735000: state=Stopped err=<nil>
	W0311 04:29:55.549185    6101 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:55.554811    6101 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-735000" ...
	I0311 04:29:55.558889    6101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b3:5d:7c:99:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/default-k8s-diff-port-735000/disk.qcow2
	I0311 04:29:55.568448    6101 main.go:141] libmachine: STDOUT: 
	I0311 04:29:55.568526    6101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:55.568592    6101 fix.go:56] duration metric: took 20.163125ms for fixHost
	I0311 04:29:55.568611    6101 start.go:83] releasing machines lock for "default-k8s-diff-port-735000", held for 20.260958ms
	W0311 04:29:55.568792    6101 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-735000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-735000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:55.577702    6101 out.go:177] 
	W0311 04:29:55.580782    6101 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:55.580816    6101 out.go:239] * 
	* 
	W0311 04:29:55.583231    6101 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:55.591709    6101 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-735000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (69.916917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.05s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-306000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-306000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.179266958s)

-- stdout --
	* [newest-cni-306000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-306000" primary control-plane node in "newest-cni-306000" cluster
	* Restarting existing qemu2 VM for "newest-cni-306000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-306000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 04:29:53.941913    6136 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:53.942039    6136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:53.942042    6136 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:53.942045    6136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:53.942178    6136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:53.943191    6136 out.go:298] Setting JSON to false
	I0311 04:29:53.959324    6136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3565,"bootTime":1710153028,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 04:29:53.959382    6136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 04:29:53.964623    6136 out.go:177] * [newest-cni-306000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 04:29:53.971860    6136 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 04:29:53.971911    6136 notify.go:220] Checking for updates...
	I0311 04:29:53.974812    6136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 04:29:53.978788    6136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 04:29:53.981774    6136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 04:29:53.984702    6136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 04:29:53.987768    6136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 04:29:53.991120    6136 config.go:182] Loaded profile config "newest-cni-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 04:29:53.991376    6136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 04:29:53.994750    6136 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 04:29:54.001766    6136 start.go:297] selected driver: qemu2
	I0311 04:29:54.001776    6136 start.go:901] validating driver "qemu2" against &{Name:newest-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:54.001836    6136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 04:29:54.004155    6136 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 04:29:54.004203    6136 cni.go:84] Creating CNI manager for ""
	I0311 04:29:54.004210    6136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 04:29:54.004238    6136 start.go:340] cluster config:
	{Name:newest-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 04:29:54.008559    6136 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 04:29:54.016733    6136 out.go:177] * Starting "newest-cni-306000" primary control-plane node in "newest-cni-306000" cluster
	I0311 04:29:54.019694    6136 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 04:29:54.019707    6136 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 04:29:54.019715    6136 cache.go:56] Caching tarball of preloaded images
	I0311 04:29:54.019772    6136 preload.go:173] Found /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 04:29:54.019777    6136 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0311 04:29:54.019858    6136 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/newest-cni-306000/config.json ...
	I0311 04:29:54.020367    6136 start.go:360] acquireMachinesLock for newest-cni-306000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:54.020395    6136 start.go:364] duration metric: took 21.333µs to acquireMachinesLock for "newest-cni-306000"
	I0311 04:29:54.020402    6136 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:54.020409    6136 fix.go:54] fixHost starting: 
	I0311 04:29:54.020537    6136 fix.go:112] recreateIfNeeded on newest-cni-306000: state=Stopped err=<nil>
	W0311 04:29:54.020545    6136 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:54.024729    6136 out.go:177] * Restarting existing qemu2 VM for "newest-cni-306000" ...
	I0311 04:29:54.027724    6136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:5c:5c:ad:62:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:54.029773    6136 main.go:141] libmachine: STDOUT: 
	I0311 04:29:54.029799    6136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:54.029826    6136 fix.go:56] duration metric: took 9.417917ms for fixHost
	I0311 04:29:54.029830    6136 start.go:83] releasing machines lock for "newest-cni-306000", held for 9.431666ms
	W0311 04:29:54.029837    6136 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:54.029886    6136 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:54.029891    6136 start.go:728] Will try again in 5 seconds ...
	I0311 04:29:59.032001    6136 start.go:360] acquireMachinesLock for newest-cni-306000: {Name:mk2f9a88bc6916eb492036071931e8825bfd7634 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 04:29:59.032477    6136 start.go:364] duration metric: took 364.042µs to acquireMachinesLock for "newest-cni-306000"
	I0311 04:29:59.032631    6136 start.go:96] Skipping create...Using existing machine configuration
	I0311 04:29:59.032654    6136 fix.go:54] fixHost starting: 
	I0311 04:29:59.033519    6136 fix.go:112] recreateIfNeeded on newest-cni-306000: state=Stopped err=<nil>
	W0311 04:29:59.033544    6136 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 04:29:59.038138    6136 out.go:177] * Restarting existing qemu2 VM for "newest-cni-306000" ...
	I0311 04:29:59.044260    6136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:5c:5c:ad:62:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18350-986/.minikube/machines/newest-cni-306000/disk.qcow2
	I0311 04:29:59.054881    6136 main.go:141] libmachine: STDOUT: 
	I0311 04:29:59.054950    6136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 04:29:59.055044    6136 fix.go:56] duration metric: took 22.393583ms for fixHost
	I0311 04:29:59.055065    6136 start.go:83] releasing machines lock for "newest-cni-306000", held for 22.564292ms
	W0311 04:29:59.055293    6136 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-306000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-306000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 04:29:59.063986    6136 out.go:177] 
	W0311 04:29:59.067041    6136 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 04:29:59.067065    6136 out.go:239] * 
	* 
	W0311 04:29:59.069456    6136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 04:29:59.082028    6136 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-306000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000: exit status 7 (70.060792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-306000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
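
Every qemu2 start failure in this run reduces to the same root cause shown above: minikube cannot reach the socket_vmnet helper at /var/run/socket_vmnet ("Connection refused"). A minimal host-side sketch for confirming this, built only from the paths printed in the log plus generic shell tools (pgrep/ls are ordinary utilities, not minikube-specific):

	# Is the socket_vmnet helper running, and does its socket exist?
	pgrep -l socket_vmnet
	ls -l /var/run/socket_vmnet
	# The log's own suggestion: recreate the affected profile.
	out/minikube-darwin-arm64 delete -p newest-cni-306000

If the socket is missing, restarting the socket_vmnet daemon on the CI host is the likely fix; how that daemon is supervised is not visible in this log.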

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-735000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (33.649875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-735000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-735000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-735000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.023875ms)

** stderr ** 
	error: context "default-k8s-diff-port-735000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-735000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (31.073917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
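
The UserAppExistsAfterStop and AddonExistsAfterStop failures here are follow-on errors rather than independent ones: because the post-stop start never succeeded, no kubeconfig context named "default-k8s-diff-port-735000" was ever recreated, so every kubectl call fails before reaching a cluster. The missing context can be confirmed with stock kubectl (nothing minikube-specific):

	# List all contexts known to the active kubeconfig.
	kubectl config get-contexts
	# Or query just the context the test expects; exits non-zero if absent.
	kubectl config get-contexts default-k8s-diff-port-735000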

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-735000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (30.979625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
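
The "(-want +got)" block above is the diff convention of the go-cmp library used by the minikube tests: "-" lines are expected entries, "+" lines are actual ones. Since the VM never started, `image list` returned nothing and every expected v1.28.4 image is reported missing. A minimal, self-contained Go sketch of how such a diff is produced (image names are copied from the log; the surrounding code is illustrative, not minikube's actual test code):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Expected images for the Kubernetes version under test (abbreviated).
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/pause:3.9",
		}
		// Actual images: empty, because the VM never started.
		got := []string{}
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.28.4 images missing (-want +got):\n%s", diff)
		}
	}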

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-735000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-735000 --alsologtostderr -v=1: exit status 83 (42.603584ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-735000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-735000"

-- /stdout --
** stderr ** 
	I0311 04:29:55.871051    6155 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:55.871182    6155 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:55.871186    6155 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:55.871188    6155 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:55.871311    6155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:55.871520    6155 out.go:298] Setting JSON to false
	I0311 04:29:55.871527    6155 mustload.go:65] Loading cluster: default-k8s-diff-port-735000
	I0311 04:29:55.871745    6155 config.go:182] Loaded profile config "default-k8s-diff-port-735000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 04:29:55.875712    6155 out.go:177] * The control-plane node default-k8s-diff-port-735000 host is not running: state=Stopped
	I0311 04:29:55.879521    6155 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-735000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-735000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (31.185542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (31.014917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-735000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-306000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000: exit status 7 (31.753042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-306000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-306000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-306000 --alsologtostderr -v=1: exit status 83 (42.6245ms)

-- stdout --
	* The control-plane node newest-cni-306000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-306000"

-- /stdout --
** stderr ** 
	I0311 04:29:59.270240    6185 out.go:291] Setting OutFile to fd 1 ...
	I0311 04:29:59.270382    6185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:59.270385    6185 out.go:304] Setting ErrFile to fd 2...
	I0311 04:29:59.270387    6185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 04:29:59.270514    6185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 04:29:59.270741    6185 out.go:298] Setting JSON to false
	I0311 04:29:59.270747    6185 mustload.go:65] Loading cluster: newest-cni-306000
	I0311 04:29:59.270939    6185 config.go:182] Loaded profile config "newest-cni-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 04:29:59.275634    6185 out.go:177] * The control-plane node newest-cni-306000 host is not running: state=Stopped
	I0311 04:29:59.278488    6185 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-306000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-306000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000: exit status 7 (32.438ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-306000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000: exit status 7 (32.54675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-306000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
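
Both Pause failures follow the same pattern: `pause` exits with status 83 and a guidance message because the host is still Stopped; it is a refusal, not a crash. Given that `status --format={{.Host}}` exits non-zero (7) for a stopped host, as shown repeatedly above, a guard built from the exact commands in this log would skip the doomed pause:

	# Only attempt pause when the host reports itself as running;
	# `status` exits 7 for a stopped host, short-circuiting the pause.
	out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 \
	  && out/minikube-darwin-arm64 pause -p newest-cni-306000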


Test pass (160/281)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 21.88
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.29.0-rc.2/json-events 19.53
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.42
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 208.96
38 TestAddons/parallel/Registry 17.9
40 TestAddons/parallel/InspektorGadget 10.22
41 TestAddons/parallel/MetricsServer 5.45
44 TestAddons/parallel/CSI 56.57
45 TestAddons/parallel/Headlamp 12.51
46 TestAddons/parallel/CloudSpanner 5.18
47 TestAddons/parallel/LocalPath 51.81
48 TestAddons/parallel/NvidiaDevicePlugin 6.15
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.07
53 TestAddons/StoppedEnableDisable 12.39
61 TestHyperKitDriverInstallOrUpdate 9.19
64 TestErrorSpam/setup 31.38
65 TestErrorSpam/start 0.37
66 TestErrorSpam/status 0.26
67 TestErrorSpam/pause 0.68
68 TestErrorSpam/unpause 0.58
69 TestErrorSpam/stop 64.28
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 47.41
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 35.2
76 TestFunctional/serial/KubeContext 0.03
77 TestFunctional/serial/KubectlGetPods 0.04
80 TestFunctional/serial/CacheCmd/cache/add_remote 8.91
81 TestFunctional/serial/CacheCmd/cache/add_local 1.21
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
83 TestFunctional/serial/CacheCmd/cache/list 0.04
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
86 TestFunctional/serial/CacheCmd/cache/delete 0.08
87 TestFunctional/serial/MinikubeKubectlCmd 0.47
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
89 TestFunctional/serial/ExtraConfig 37.04
90 TestFunctional/serial/ComponentHealth 0.04
91 TestFunctional/serial/LogsCmd 0.68
92 TestFunctional/serial/LogsFileCmd 0.66
93 TestFunctional/serial/InvalidService 3.94
95 TestFunctional/parallel/ConfigCmd 0.24
96 TestFunctional/parallel/DashboardCmd 9.77
97 TestFunctional/parallel/DryRun 0.23
98 TestFunctional/parallel/InternationalLanguage 0.11
99 TestFunctional/parallel/StatusCmd 0.23
104 TestFunctional/parallel/AddonsCmd 0.12
105 TestFunctional/parallel/PersistentVolumeClaim 24.84
107 TestFunctional/parallel/SSHCmd 0.12
108 TestFunctional/parallel/CpCmd 0.41
110 TestFunctional/parallel/FileSync 0.07
111 TestFunctional/parallel/CertSync 0.39
115 TestFunctional/parallel/NodeLabels 0.04
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.11
119 TestFunctional/parallel/License 1.33
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.09
132 TestFunctional/parallel/ServiceCmd/List 0.28
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
135 TestFunctional/parallel/ServiceCmd/Format 0.1
136 TestFunctional/parallel/ServiceCmd/URL 0.1
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
138 TestFunctional/parallel/ProfileCmd/profile_list 0.15
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
140 TestFunctional/parallel/MountCmd/any-port 9.22
141 TestFunctional/parallel/MountCmd/specific-port 0.99
142 TestFunctional/parallel/MountCmd/VerifyCleanup 0.71
143 TestFunctional/parallel/Version/short 0.04
144 TestFunctional/parallel/Version/components 0.18
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
149 TestFunctional/parallel/ImageCommands/ImageBuild 5.94
150 TestFunctional/parallel/ImageCommands/Setup 5.5
151 TestFunctional/parallel/DockerEnv/bash 0.46
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
155 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.05
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.43
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.24
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
162 TestFunctional/delete_addon-resizer_images 0.11
163 TestFunctional/delete_my-image_image 0.04
164 TestFunctional/delete_minikube_cached_images 0.04
168 TestMutliControlPlane/serial/StartCluster 245.67
169 TestMutliControlPlane/serial/DeployApp 9.03
170 TestMutliControlPlane/serial/PingHostFromPods 0.78
171 TestMutliControlPlane/serial/AddWorkerNode 76.1
172 TestMutliControlPlane/serial/NodeLabels 0.12
173 TestMutliControlPlane/serial/HAppyAfterClusterStart 2.47
174 TestMutliControlPlane/serial/CopyFile 4.38
178 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 76.89
186 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.07
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 3.17
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.33
220 TestMainNoArgs 0.04
265 TestStoppedBinaryUpgrade/Setup 4.91
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
283 TestNoKubernetes/serial/ProfileList 0.15
284 TestNoKubernetes/serial/Stop 2.14
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.06
302 TestStartStop/group/old-k8s-version/serial/Stop 2.11
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/no-preload/serial/Stop 3.8
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
324 TestStartStop/group/embed-certs/serial/Stop 2.05
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.93
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
342 TestStartStop/group/newest-cni/serial/Stop 3.1
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-752000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-752000: exit status 85 (94.118083ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-752000 | jenkins | v1.32.0 | 11 Mar 24 03:34 PDT |          |
	|         | -p download-only-752000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 03:34:22
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 03:34:22.565619    1436 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:34:22.565751    1436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:34:22.565756    1436 out.go:304] Setting ErrFile to fd 2...
	I0311 03:34:22.565759    1436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:34:22.565885    1436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	W0311 03:34:22.565971    1436 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18350-986/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18350-986/.minikube/config/config.json: no such file or directory
	I0311 03:34:22.567165    1436 out.go:298] Setting JSON to true
	I0311 03:34:22.584561    1436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":234,"bootTime":1710153028,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:34:22.584622    1436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:34:22.589119    1436 out.go:97] [download-only-752000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 03:34:22.593112    1436 out.go:169] MINIKUBE_LOCATION=18350
	I0311 03:34:22.589264    1436 notify.go:220] Checking for updates...
	W0311 03:34:22.589273    1436 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 03:34:22.602074    1436 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:34:22.606112    1436 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:34:22.610098    1436 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:34:22.613069    1436 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	W0311 03:34:22.619072    1436 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 03:34:22.619313    1436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:34:22.624064    1436 out.go:97] Using the qemu2 driver based on user configuration
	I0311 03:34:22.624085    1436 start.go:297] selected driver: qemu2
	I0311 03:34:22.624113    1436 start.go:901] validating driver "qemu2" against <nil>
	I0311 03:34:22.624170    1436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 03:34:22.628078    1436 out.go:169] Automatically selected the socket_vmnet network
	I0311 03:34:22.634716    1436 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 03:34:22.634813    1436 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 03:34:22.634888    1436 cni.go:84] Creating CNI manager for ""
	I0311 03:34:22.634906    1436 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 03:34:22.634950    1436 start.go:340] cluster config:
	{Name:download-only-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:34:22.640617    1436 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 03:34:22.645090    1436 out.go:97] Downloading VM boot image ...
	I0311 03:34:22.645103    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0311 03:34:40.248378    1436 out.go:97] Starting "download-only-752000" primary control-plane node in "download-only-752000" cluster
	I0311 03:34:40.248398    1436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 03:34:40.544704    1436 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 03:34:40.544760    1436 cache.go:56] Caching tarball of preloaded images
	I0311 03:34:40.545484    1436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 03:34:40.551031    1436 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 03:34:40.551057    1436 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:34:41.156837    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 03:35:01.530566    1436 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:01.530747    1436 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:02.228780    1436 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 03:35:02.228986    1436 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/download-only-752000/config.json ...
	I0311 03:35:02.229002    1436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/download-only-752000/config.json: {Name:mk662d2b0a7da82438161412ea8665a1b408d5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:35:02.229219    1436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 03:35:02.229406    1436 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0311 03:35:02.582303    1436 out.go:169] 
	W0311 03:35:02.587324    1436 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040 0x1050b3040] Decompressors:map[bz2:0x140008955c0 gz:0x140008955c8 tar:0x14000895570 tar.bz2:0x14000895580 tar.gz:0x14000895590 tar.xz:0x140008955a0 tar.zst:0x140008955b0 tbz2:0x14000895580 tgz:0x14000895590 txz:0x140008955a0 tzst:0x140008955b0 xz:0x140008955d0 zip:0x140008955e0 zst:0x140008955d8] Getters:map[file:0x1400222c560 http:0x140007c8230 https:0x140007c8280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0311 03:35:02.587348    1436 out_reason.go:110] 
	W0311 03:35:02.595156    1436 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 03:35:02.599234    1436 out.go:169] 
	
	
	* The control-plane node download-only-752000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-752000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
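
The kubectl caching failure captured inside this log is itself informative: the download 404s on the checksum URL, apparently because no darwin/arm64 kubectl artifact is published for v1.20.0, which is exactly the "bad response code: 404" the getter reports. This can be checked by hand with a HEAD request (curl -I is standard curl, nothing minikube-specific):

	curl -I https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256

The test still passes because LogsDuration only exercises `minikube logs` against a download-only profile, where a non-zero exit is tolerated (the test records "exit status 85" and proceeds).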

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-752000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (21.88s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-266000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-266000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (21.876783875s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (21.88s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-266000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-266000: exit status 85 (77.154959ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-752000 | jenkins | v1.32.0 | 11 Mar 24 03:34 PDT |                     |
	|         | -p download-only-752000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-752000        | download-only-752000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| start   | -o=json --download-only        | download-only-266000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT |                     |
	|         | -p download-only-266000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 03:35:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 03:35:03.269379    1489 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:35:03.269491    1489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:35:03.269494    1489 out.go:304] Setting ErrFile to fd 2...
	I0311 03:35:03.269497    1489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:35:03.269624    1489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:35:03.270729    1489 out.go:298] Setting JSON to true
	I0311 03:35:03.286913    1489 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":275,"bootTime":1710153028,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:35:03.286980    1489 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:35:03.291271    1489 out.go:97] [download-only-266000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 03:35:03.295109    1489 out.go:169] MINIKUBE_LOCATION=18350
	I0311 03:35:03.291338    1489 notify.go:220] Checking for updates...
	I0311 03:35:03.302014    1489 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:35:03.305193    1489 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:35:03.308197    1489 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:35:03.311192    1489 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	W0311 03:35:03.317216    1489 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 03:35:03.317411    1489 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:35:03.321128    1489 out.go:97] Using the qemu2 driver based on user configuration
	I0311 03:35:03.321135    1489 start.go:297] selected driver: qemu2
	I0311 03:35:03.321147    1489 start.go:901] validating driver "qemu2" against <nil>
	I0311 03:35:03.321197    1489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 03:35:03.324111    1489 out.go:169] Automatically selected the socket_vmnet network
	I0311 03:35:03.329328    1489 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 03:35:03.329413    1489 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 03:35:03.329453    1489 cni.go:84] Creating CNI manager for ""
	I0311 03:35:03.329461    1489 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 03:35:03.329473    1489 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 03:35:03.329516    1489 start.go:340] cluster config:
	{Name:download-only-266000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:35:03.333879    1489 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 03:35:03.337225    1489 out.go:97] Starting "download-only-266000" primary control-plane node in "download-only-266000" cluster
	I0311 03:35:03.337234    1489 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 03:35:03.990427    1489 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 03:35:03.990502    1489 cache.go:56] Caching tarball of preloaded images
	I0311 03:35:03.991289    1489 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 03:35:03.996865    1489 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0311 03:35:03.996895    1489 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:04.602279    1489 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 03:35:20.795259    1489 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:20.795433    1489 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:21.376747    1489 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 03:35:21.376936    1489 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/download-only-266000/config.json ...
	I0311 03:35:21.376951    1489 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/download-only-266000/config.json: {Name:mkfea597e0f71572bfc08cb95844f6dc02d3114e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 03:35:21.377173    1489 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 03:35:21.377289    1489 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-266000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-266000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-266000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.0-rc.2/json-events (19.53s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (19.534046459s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (19.53s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-861000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-861000: exit status 85 (82.280208ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-752000 | jenkins | v1.32.0 | 11 Mar 24 03:34 PDT |                     |
	|         | -p download-only-752000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-752000           | download-only-752000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| start   | -o=json --download-only           | download-only-266000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT |                     |
	|         | -p download-only-266000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| delete  | -p download-only-266000           | download-only-266000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT | 11 Mar 24 03:35 PDT |
	| start   | -o=json --download-only           | download-only-861000 | jenkins | v1.32.0 | 11 Mar 24 03:35 PDT |                     |
	|         | -p download-only-861000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 03:35:25
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 03:35:25.686976    1524 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:35:25.687116    1524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:35:25.687119    1524 out.go:304] Setting ErrFile to fd 2...
	I0311 03:35:25.687122    1524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:35:25.687252    1524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:35:25.688258    1524 out.go:298] Setting JSON to true
	I0311 03:35:25.704351    1524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":297,"bootTime":1710153028,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:35:25.704405    1524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:35:25.708190    1524 out.go:97] [download-only-861000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 03:35:25.713007    1524 out.go:169] MINIKUBE_LOCATION=18350
	I0311 03:35:25.708286    1524 notify.go:220] Checking for updates...
	I0311 03:35:25.720922    1524 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:35:25.724062    1524 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:35:25.727084    1524 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:35:25.730094    1524 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	W0311 03:35:25.736049    1524 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 03:35:25.736195    1524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:35:25.739025    1524 out.go:97] Using the qemu2 driver based on user configuration
	I0311 03:35:25.739036    1524 start.go:297] selected driver: qemu2
	I0311 03:35:25.739041    1524 start.go:901] validating driver "qemu2" against <nil>
	I0311 03:35:25.739090    1524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 03:35:25.741971    1524 out.go:169] Automatically selected the socket_vmnet network
	I0311 03:35:25.747090    1524 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 03:35:25.747189    1524 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 03:35:25.747230    1524 cni.go:84] Creating CNI manager for ""
	I0311 03:35:25.747240    1524 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 03:35:25.747251    1524 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 03:35:25.747306    1524 start.go:340] cluster config:
	{Name:download-only-861000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:35:25.751621    1524 iso.go:125] acquiring lock: {Name:mkfb62d00e16e35f9166c0361e7e1b5a11aab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 03:35:25.755082    1524 out.go:97] Starting "download-only-861000" primary control-plane node in "download-only-861000" cluster
	I0311 03:35:25.755091    1524 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 03:35:26.417185    1524 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 03:35:26.417235    1524 cache.go:56] Caching tarball of preloaded images
	I0311 03:35:26.417988    1524 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 03:35:26.423537    1524 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0311 03:35:26.423569    1524 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0311 03:35:27.008592    1524 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18350-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-861000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-861000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
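
Note on the result above: "minikube logs" exits with status 85 here because the download-only profile never created a host, and the test evidently treats that non-zero exit as the expected outcome, hence the PASS. A minimal sketch (in Go, not the actual test source) of asserting an expected exit status from a subprocess; the binary path and profile name are taken from the log:

// Sketch: tolerate a specific expected exit status from a subprocess,
// the way the LogsDuration check accepts exit status 85 from "minikube logs"
// on a download-only profile.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "logs", "-p", "download-only-861000")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		// Expected: the profile was created with --download-only, so no
		// host exists and "logs" has nothing to collect.
		fmt.Printf("got expected exit status 85; output:\n%s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v\n", err)
}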

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-861000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.42s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-395000 --alsologtostderr --binary-mirror http://127.0.0.1:49330 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-395000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-395000
--- PASS: TestBinaryMirror (0.42s)
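
TestBinaryMirror starts a download-only profile with --binary-mirror http://127.0.0.1:49330, so the Kubernetes binaries are fetched from a local HTTP endpoint instead of the public release bucket. A minimal sketch, assuming a ./mirror directory laid out the way minikube expects (the test itself serves the files in-process), of what such a local mirror can look like in Go:

// Sketch: a stand-in static file server that could back --binary-mirror.
// The ./mirror path is a hypothetical example.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror at http://127.0.0.1:49330/.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:49330", nil))
}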

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-597000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-597000: exit status 85 (59.551459ms)

-- stdout --
	* Profile "addons-597000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-597000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-597000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-597000: exit status 85 (62.340583ms)

-- stdout --
	* Profile "addons-597000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-597000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (208.96s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-597000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-597000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m28.964124209s)
--- PASS: TestAddons/Setup (208.96s)

TestAddons/parallel/Registry (17.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 7.205958ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fb4xz" [ad331ad1-6a0f-4327-8b07-0646fb1a581c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004862334s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8mg7f" [1c8aa6c3-cb12-49ed-90b8-55a73f146bf6] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004386583s
addons_test.go:340: (dbg) Run:  kubectl --context addons-597000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-597000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-597000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.613892875s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 ip
2024/03/11 03:39:33 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.90s)
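
The registry check above probes the addon twice: once from inside the cluster with "wget --spider" in a busybox pod, and once from the host against the node IP on port 5000 (the DEBUG GET line). A sketch of that second, host-side probe in Go; the IP and port come from the log, and the 5-second timeout is an assumption:

// Sketch: plain HTTP reachability probe against the registry endpoint.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.105.2:5000")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with status:", resp.Status)
}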

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gqfjm" [9f65bad0-f9f0-49d2-8d17-a25104bdf0f2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004133083s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-597000
addons_test.go:841: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-597000: (5.220252459s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.45s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.21825ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-jx5gp" [20084289-c494-4cb3-9522-626c08a8c482] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004412667s
addons_test.go:415: (dbg) Run:  kubectl --context addons-597000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.45s)

TestAddons/parallel/CSI (56.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 8.080292ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-597000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-597000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [262233cd-0d56-46c1-89e5-b210fbbbe88d] Pending
helpers_test.go:344: "task-pv-pod" [262233cd-0d56-46c1-89e5-b210fbbbe88d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [262233cd-0d56-46c1-89e5-b210fbbbe88d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005096958s
addons_test.go:584: (dbg) Run:  kubectl --context addons-597000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-597000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-597000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-597000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-597000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-597000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-597000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [180cb8ab-adcf-4d6c-b80a-e5c71ff119dd] Pending
helpers_test.go:344: "task-pv-pod-restore" [180cb8ab-adcf-4d6c-b80a-e5c71ff119dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [180cb8ab-adcf-4d6c-b80a-e5c71ff119dd] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0042645s
addons_test.go:626: (dbg) Run:  kubectl --context addons-597000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-597000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-597000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-arm64 -p addons-597000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.114330792s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.57s)
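
The runs of identical "get pvc ... jsonpath={.status.phase}" lines above are a poll loop: the helper re-queries the PVC until its phase reaches "Bound" or the wait window expires. A sketch of that loop in Go, shelling out to kubectl as the helper does; the context, PVC name, and 6m0s deadline match the log, while the 2-second poll interval is an assumption:

// Sketch: poll a PVC's phase via kubectl until it reports "Bound".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-597000",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc to bind")
}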

TestAddons/parallel/Headlamp (12.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-597000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-f7nz8" [64bd6514-9048-41f4-b1b6-ee02c1bdc4c1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-f7nz8" [64bd6514-9048-41f4-b1b6-ee02c1bdc4c1] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004418583s
--- PASS: TestAddons/parallel/Headlamp (12.51s)

TestAddons/parallel/CloudSpanner (5.18s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-jr2ft" [28f467c2-9eae-475d-a3ec-16977ebe58ac] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00420325s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-597000
--- PASS: TestAddons/parallel/CloudSpanner (5.18s)

TestAddons/parallel/LocalPath (51.81s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-597000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-597000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-597000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [df46f00e-7391-4d35-a31e-fdcc823f00b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [df46f00e-7391-4d35-a31e-fdcc823f00b8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [df46f00e-7391-4d35-a31e-fdcc823f00b8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003930667s
addons_test.go:891: (dbg) Run:  kubectl --context addons-597000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 ssh "cat /opt/local-path-provisioner/pvc-3949e277-7901-4317-bdda-9cee2a039c24_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-597000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-597000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-arm64 -p addons-597000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-arm64 -p addons-597000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.356804125s)
--- PASS: TestAddons/parallel/LocalPath (51.81s)

TestAddons/parallel/NvidiaDevicePlugin (6.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-74ckq" [71bbf97d-2702-49fb-9d63-7235962d84c3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003967042s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-597000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.15s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6949z" [7e612281-9458-40b3-a2cd-4fd0b14ea304] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008748916s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-597000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-597000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-597000
addons_test.go:172: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-597000: (12.188398833s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-597000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-597000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-597000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (9.19s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18350
- KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2888690895/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

--- PASS: TestHyperKitDriverInstallOrUpdate (9.19s)

TestErrorSpam/setup (31.38s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-556000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-556000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 --driver=qemu2 : (31.38074075s)
--- PASS: TestErrorSpam/setup (31.38s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (64.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 stop: (12.209320375s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 stop: (26.029944167s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-556000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-556000 stop: (26.035008s)
--- PASS: TestErrorSpam/stop (64.28s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18350-986/.minikube/files/etc/test/nested/copy/1434/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.41s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-864000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-864000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.411524125s)
--- PASS: TestFunctional/serial/StartWithProxy (47.41s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.2s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-864000 --alsologtostderr -v=8
E0311 03:44:15.836148    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:15.844719    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:15.856928    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:15.879126    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:15.921279    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:16.003459    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:16.165700    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:16.487882    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:17.128827    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:18.410151    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:20.972190    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:26.094197    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:44:36.336142    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-864000 --alsologtostderr -v=8: (35.198951417s)
functional_test.go:659: soft start took 35.199329667s for "functional-864000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.20s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-864000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 cache add registry.k8s.io/pause:3.1: (3.361373583s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 cache add registry.k8s.io/pause:3.3: (3.297900541s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 cache add registry.k8s.io/pause:latest: (2.251429417s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.91s)

TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local4289169138/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cache add minikube-local-cache-test:functional-864000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cache delete minikube-local-cache-test:functional-864000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-864000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.477875ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 cache reload: (1.898791375s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
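
The cache_reload sequence above removes an image inside the node, confirms crictl no longer finds it (the expected exit status 1), reloads the cache, and confirms the image is back. A sketch of the same sequence as one Go helper; the binary, profile, and image names come from the log:

// Sketch: delete an image in the node, verify absence, reload cache,
// verify presence. cmd.Run returns a non-nil error on non-zero exit.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("out/minikube-darwin-arm64", args...).Run()
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	_ = run("-p", "functional-864000", "ssh", "sudo", "docker", "rmi", img)
	if err := run("-p", "functional-864000", "ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("image unexpectedly still present")
		return
	}
	if err := run("-p", "functional-864000", "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	if err := run("-p", "functional-864000", "ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored", img)
}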

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.47s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 kubectl -- --context functional-864000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.47s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-864000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (37.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-864000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0311 03:44:56.817131    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-864000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.0355375s)
functional_test.go:757: restart took 37.03564725s for "functional-864000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.04s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-864000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
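
ComponentHealth lists the control-plane pods as JSON and checks that each reports phase Running and a Ready status, as the log lines above show. A sketch in Go of decoding just the fields that check needs; the struct is trimmed to those fields and everything else in the pod objects is ignored:

// Sketch: report phase and Ready condition for control-plane pods.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-864000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}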

TestFunctional/serial/LogsCmd (0.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.68s)

TestFunctional/serial/LogsFileCmd (0.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3358254410/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)

TestFunctional/serial/InvalidService (3.94s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-864000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-864000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-864000: exit status 115 (103.1055ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32546 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-864000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)
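The failure mode exercised here is a Service whose selector matches no running pod: minikube service can still resolve the NodePort URL (see the stdout table above) but exits 115 with SVC_UNREACHABLE. A minimal sketch of that kind of manifest, assuming a selector that matches nothing (the actual testdata/invalidsvc.yaml may differ):

    kubectl --context functional-864000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod   # hypothetical label; matches no pod, so the service has no endpoints
      ports:
      - port: 80
    EOF
    out/minikube-darwin-arm64 service invalid-svc -p functional-864000   # expected: exit status 115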

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 config get cpus: exit status 14 (31.901041ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 config get cpus: exit status 14 (34.094583ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
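Exit status 14 is the expected code when config get is asked for a key that is not set; the test round-trips unset/set/get twice to confirm both directions. The equivalent shell sequence:

    out/minikube-darwin-arm64 -p functional-864000 config get cpus     # unset key: exit status 14
    out/minikube-darwin-arm64 -p functional-864000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-864000 config get cpus     # prints 2, exit 0
    out/minikube-darwin-arm64 -p functional-864000 config unset cpus
    out/minikube-darwin-arm64 -p functional-864000 config get cpus     # exit status 14 again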

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-864000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-864000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2395: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.77s)
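The "unable to kill pid 2395" note is benign: the dashboard process had already exited by the time the test's cleanup ran. Roughly what the test drives, as a shell sketch:

    out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-864000 --alsologtostderr -v=1 &
    DASH_PID=$!     # hypothetical variable name
    # ... consume the printed URL ...
    kill "$DASH_PID" 2>/dev/null || true   # may race with normal exit, as seen above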

                                                
                                    
TestFunctional/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-864000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-864000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.783083ms)

                                                
                                                
-- stdout --
	* [functional-864000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 03:46:20.294238    2382 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:46:20.294374    2382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:46:20.294378    2382 out.go:304] Setting ErrFile to fd 2...
	I0311 03:46:20.294380    2382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:46:20.294496    2382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:46:20.295629    2382 out.go:298] Setting JSON to false
	I0311 03:46:20.312160    2382 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":952,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:46:20.312248    2382 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:46:20.317437    2382 out.go:177] * [functional-864000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 03:46:20.324430    2382 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 03:46:20.324490    2382 notify.go:220] Checking for updates...
	I0311 03:46:20.331382    2382 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:46:20.334370    2382 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:46:20.337346    2382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:46:20.340369    2382 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 03:46:20.343333    2382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 03:46:20.346655    2382 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:46:20.346893    2382 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:46:20.351336    2382 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 03:46:20.358360    2382 start.go:297] selected driver: qemu2
	I0311 03:46:20.358368    2382 start.go:901] validating driver "qemu2" against &{Name:functional-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:46:20.358414    2382 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 03:46:20.364287    2382 out.go:177] 
	W0311 03:46:20.368402    2382 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0311 03:46:20.372213    2382 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-864000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
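--dry-run runs the full start validation without creating a VM, so the undersized --memory 250MB request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second invocation without --memory validates cleanly against the existing profile. Reproduced as a shell sketch:

    out/minikube-darwin-arm64 start -p functional-864000 --dry-run --memory 250MB --driver=qemu2
    echo $?   # 23: 250MiB is below the usable minimum of 1800MB
    out/minikube-darwin-arm64 start -p functional-864000 --dry-run --driver=qemu2
    echo $?   # 0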

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-864000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-864000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.6615ms)

                                                
                                                
-- stdout --
	* [functional-864000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 03:46:20.177185    2378 out.go:291] Setting OutFile to fd 1 ...
	I0311 03:46:20.177298    2378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:46:20.177300    2378 out.go:304] Setting ErrFile to fd 2...
	I0311 03:46:20.177303    2378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 03:46:20.177427    2378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
	I0311 03:46:20.178912    2378 out.go:298] Setting JSON to false
	I0311 03:46:20.196227    2378 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":952,"bootTime":1710153028,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 03:46:20.196317    2378 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 03:46:20.201389    2378 out.go:177] * [functional-864000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0311 03:46:20.209423    2378 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 03:46:20.212387    2378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	I0311 03:46:20.209451    2378 notify.go:220] Checking for updates...
	I0311 03:46:20.219270    2378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 03:46:20.222361    2378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 03:46:20.225428    2378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	I0311 03:46:20.226820    2378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 03:46:20.230608    2378 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 03:46:20.230879    2378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 03:46:20.235359    2378 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0311 03:46:20.240352    2378 start.go:297] selected driver: qemu2
	I0311 03:46:20.240359    2378 start.go:901] validating driver "qemu2" against &{Name:functional-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 03:46:20.240398    2378 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 03:46:20.246366    2378 out.go:177] 
	W0311 03:46:20.250440    2378 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0311 03:46:20.254352    2378 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
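Same dry-run scenario as above, but the RSRC_INSUFFICIENT_REQ_MEMORY message is emitted in French ("Fermeture en raison de ..." is the translated "Exiting due to ..."). A sketch, on the assumption that the translation is selected through the standard locale environment variables:

    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-864000 --dry-run --memory 250MB --driver=qemu2
    # stderr: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."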

                                                
                                    
TestFunctional/parallel/StatusCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.23s)
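Note that "kublet:" in the -f argument above is only literal label text inside the format string; the Go template field itself is .Kubelet. The three output modes exercised:

    out/minikube-darwin-arm64 -p functional-864000 status                               # human-readable
    out/minikube-darwin-arm64 -p functional-864000 status -o json                       # machine-readable
    out/minikube-darwin-arm64 -p functional-864000 status -f 'kubelet:{{.Kubelet}}'     # custom Go template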

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [10f65f55-90fc-4d56-8fad-4cf1a2266c10] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004001666s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-864000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-864000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-864000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-864000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ef79deb7-c5ab-4ea8-9cb1-ebe6647ee1ee] Pending
helpers_test.go:344: "sp-pod" [ef79deb7-c5ab-4ea8-9cb1-ebe6647ee1ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ef79deb7-c5ab-4ea8-9cb1-ebe6647ee1ee] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004058s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-864000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-864000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-864000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ce547581-7ad9-48b0-ae28-33399d2bb550] Pending
helpers_test.go:344: "sp-pod" [ce547581-7ad9-48b0-ae28-33399d2bb550] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ce547581-7ad9-48b0-ae28-33399d2bb550] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004126166s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-864000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.84s)
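The persistence check works by writing /tmp/mount/foo through the claim, deleting the pod, then listing the file from a second pod bound to the same PVC; the volume outlives the pod. A sketch of the kind of claim involved (hypothetical values; the actual testdata/storage-provisioner/pvc.yaml may differ):

    kubectl --context functional-864000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF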

                                                
                                    
TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh -n functional-864000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cp functional-864000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd778596944/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh -n functional-864000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh -n functional-864000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.41s)

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1434/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo cat /etc/test/nested/copy/1434/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

                                                
                                    
TestFunctional/parallel/CertSync (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1434.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo cat /etc/ssl/certs/1434.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1434.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo cat /usr/share/ca-certificates/1434.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo cat /etc/ssl/certs/14342.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo cat /usr/share/ca-certificates/14342.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.39s)
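The 51391683.0 and 3ec20f2e.0 entries are OpenSSL subject-hash names: each synced certificate is installed both under its .pem name and under a hash-named file so TLS libraries can locate it by subject. The hash for a given certificate can be computed with stock openssl, e.g.:

    openssl x509 -noout -subject_hash -in 1434.pem   # apparently prints 51391683 for this test cert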

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-864000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 ssh "sudo systemctl is-active crio": exit status 1 (111.154417ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)
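systemctl is-active exits 0 only when the unit is active, and prints "inactive" with exit status 3 otherwise; the log above shows minikube's ssh wrapper surfacing that remote status 3 on stderr while itself exiting 1. So the check amounts to:

    out/minikube-darwin-arm64 -p functional-864000 ssh "sudo systemctl is-active docker"   # "active", exit 0
    out/minikube-darwin-arm64 -p functional-864000 ssh "sudo systemctl is-active crio"     # "inactive", remote exit 3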

                                                
                                    
TestFunctional/parallel/License (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.327868958s)
--- PASS: TestFunctional/parallel/License (1.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-864000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-864000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-864000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-864000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2231: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-864000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-864000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4ede3693-aa6a-46ea-ba86-bd3f4809ae8a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0311 03:45:37.777367    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [4ede3693-aa6a-46ea-ba86-bd3f4809ae8a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003566667s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)
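With the tunnel from StartTunnel running, a LoadBalancer Service gets a host-reachable ingress IP instead of staying pending, which is what the IngressIP check below reads back. A sketch of the kind of service deployed here (hypothetical manifest; the actual testdata/testsvc.yaml may differ):

    kubectl --context functional-864000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80
    EOF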

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-864000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.104.49 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-864000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-864000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-864000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-sb665" [51c78233-5012-49d5-93c1-d96157bd2e65] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-sb665" [51c78233-5012-49d5-93c1-d96157bd2e65] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003930833s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 service list -o json
functional_test.go:1490: Took "274.202208ms" to run "out/minikube-darwin-arm64 -p functional-864000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:32638
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:32638
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
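Once the NodePort endpoint is known it can be consumed directly, e.g.:

    URL=$(out/minikube-darwin-arm64 -p functional-864000 service hello-node --url)
    curl -s "$URL"   # the echoserver image should reflect the request details back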

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "109.681584ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "36.74925ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "108.304333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "37.632709ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2511247930/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710153968998786000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2511247930/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710153968998786000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2511247930/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710153968998786000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2511247930/001/test-1710153968998786000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.00125ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 11 10:46 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 11 10:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 11 10:46 test-1710153968998786000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh cat /mount-9p/test-1710153968998786000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-864000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4f5eab2b-7379-4a7a-89e3-3e46a1e308db] Pending
helpers_test.go:344: "busybox-mount" [4f5eab2b-7379-4a7a-89e3-3e46a1e308db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4f5eab2b-7379-4a7a-89e3-3e46a1e308db] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4f5eab2b-7379-4a7a-89e3-3e46a1e308db] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004141667s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-864000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2511247930/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.22s)
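The mount command is a long-lived process serving the host directory into the guest over 9p, which is why the first findmnt probe above can lose the race and exit 1 before the retry succeeds. The flow, sketched (HOSTDIR is a hypothetical stand-in for the temp directory):

    out/minikube-darwin-arm64 mount -p functional-864000 "$HOSTDIR:/mount-9p" &
    out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry
    out/minikube-darwin-arm64 -p functional-864000 ssh -- ls -la /mount-9p
    out/minikube-darwin-arm64 mount -p functional-864000 --kill=true   # tears down the profile's mounts, as in VerifyCleanup below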

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2924177558/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.718167ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2924177558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 ssh "sudo umount -f /mount-9p": exit status 1 (59.952083ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-864000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2924177558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T" /mount1: exit status 1 (71.405625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-864000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-864000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3193259508/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.71s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-864000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-864000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-864000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-864000 image ls --format short --alsologtostderr:
I0311 03:46:44.061326    2574 out.go:291] Setting OutFile to fd 1 ...
I0311 03:46:44.061502    2574 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.061507    2574 out.go:304] Setting ErrFile to fd 2...
I0311 03:46:44.061509    2574 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.061642    2574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
I0311 03:46:44.062085    2574 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.062153    2574 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.063085    2574 ssh_runner.go:195] Run: systemctl --version
I0311 03:46:44.063094    2574 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/functional-864000/id_rsa Username:docker}
I0311 03:46:44.085667    2574 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-864000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | be5e6f23a9904 | 43.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/library/minikube-local-cache-test | functional-864000 | 8ecc295720470 | 30B    |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-864000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | latest            | 760b7cbba31e1 | 192MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-864000 image ls --format table --alsologtostderr:
I0311 03:46:44.139417    2578 out.go:291] Setting OutFile to fd 1 ...
I0311 03:46:44.139550    2578 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.139554    2578 out.go:304] Setting ErrFile to fd 2...
I0311 03:46:44.139556    2578 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.139678    2578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
I0311 03:46:44.140120    2578 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.140181    2578 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.141052    2578 ssh_runner.go:195] Run: systemctl --version
I0311 03:46:44.141060    2578 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/functional-864000/id_rsa Username:docker}
I0311 03:46:44.161486    2578 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-864000 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-864000"],"size":"32900000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8ecc295720470e9f4db3e4482fceef7ebe25e4c1b3acd3de435c145a460a082a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-864000"],"size":"30"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-864000 image ls --format json --alsologtostderr:
I0311 03:46:44.136763    2577 out.go:291] Setting OutFile to fd 1 ...
I0311 03:46:44.136906    2577 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.136909    2577 out.go:304] Setting ErrFile to fd 2...
I0311 03:46:44.136912    2577 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.137041    2577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
I0311 03:46:44.137446    2577 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.137506    2577 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.138406    2577 ssh_runner.go:195] Run: systemctl --version
I0311 03:46:44.138417    2577 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/functional-864000/id_rsa Username:docker}
I0311 03:46:44.159383    2577 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
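
Note: the JSON above is a flat array of image records. A minimal Go sketch for consuming it (the struct fields mirror the id/repoDigests/repoTags/size keys visible in the stdout above; the binary path and profile name are the ones from this run, adjust for your own setup):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image mirrors one record of `image ls --format json` as printed above.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // bytes, encoded as a string
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "-p", "functional-864000", "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }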

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-864000 image ls --format yaml --alsologtostderr:
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-864000
size: "32900000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8ecc295720470e9f4db3e4482fceef7ebe25e4c1b3acd3de435c145a460a082a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-864000
size: "30"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-864000 image ls --format yaml --alsologtostderr:
I0311 03:46:44.061366    2573 out.go:291] Setting OutFile to fd 1 ...
I0311 03:46:44.061545    2573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.061550    2573 out.go:304] Setting ErrFile to fd 2...
I0311 03:46:44.061552    2573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.061679    2573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
I0311 03:46:44.062324    2573 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.062385    2573 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.063573    2573 ssh_runner.go:195] Run: systemctl --version
I0311 03:46:44.063581    2573 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/functional-864000/id_rsa Username:docker}
I0311 03:46:44.085197    2573 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-864000 ssh pgrep buildkitd: exit status 1 (55.941959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
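
Note: the non-zero pgrep exit above simply means no buildkitd daemon is running in the guest, so the image build below goes through the classic docker builder; that is why docker prints the DEPRECATED legacy-builder warning in the build's stderr.
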
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image build -t localhost/my-image:functional-864000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 image build -t localhost/my-image:functional-864000 testdata/build --alsologtostderr: (5.810929083s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-864000 image build -t localhost/my-image:functional-864000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in d7c65cd09677
Removing intermediate container d7c65cd09677
---> e412244eb1b1
Step 3/3 : ADD content.txt /
---> 905836c915f5
Successfully built 905836c915f5
Successfully tagged localhost/my-image:functional-864000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-864000 image build -t localhost/my-image:functional-864000 testdata/build --alsologtostderr:
I0311 03:46:44.261851    2583 out.go:291] Setting OutFile to fd 1 ...
I0311 03:46:44.262056    2583 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.262060    2583 out.go:304] Setting ErrFile to fd 2...
I0311 03:46:44.262062    2583 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 03:46:44.262226    2583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18350-986/.minikube/bin
I0311 03:46:44.262630    2583 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.263464    2583 config.go:182] Loaded profile config "functional-864000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 03:46:44.264381    2583 ssh_runner.go:195] Run: systemctl --version
I0311 03:46:44.264391    2583 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18350-986/.minikube/machines/functional-864000/id_rsa Username:docker}
I0311 03:46:44.284765    2583 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3948934531.tar
I0311 03:46:44.284819    2583 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0311 03:46:44.288220    2583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3948934531.tar
I0311 03:46:44.289615    2583 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3948934531.tar: stat -c "%s %y" /var/lib/minikube/build/build.3948934531.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3948934531.tar': No such file or directory
I0311 03:46:44.289632    2583 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3948934531.tar --> /var/lib/minikube/build/build.3948934531.tar (3072 bytes)
I0311 03:46:44.298595    2583 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3948934531
I0311 03:46:44.302541    2583 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3948934531 -xf /var/lib/minikube/build/build.3948934531.tar
I0311 03:46:44.305882    2583 docker.go:360] Building image: /var/lib/minikube/build/build.3948934531
I0311 03:46:44.305931    2583 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-864000 /var/lib/minikube/build/build.3948934531
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0311 03:46:50.026779    2583 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-864000 /var/lib/minikube/build/build.3948934531: (5.721101792s)
I0311 03:46:50.026845    2583 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3948934531
I0311 03:46:50.030489    2583 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3948934531.tar
I0311 03:46:50.033564    2583 build_images.go:207] Built localhost/my-image:functional-864000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3948934531.tar
I0311 03:46:50.033580    2583 build_images.go:123] succeeded building to: functional-864000
I0311 03:46:50.033583    2583 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.94s)
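
Note: the build-then-verify flow above (image build from testdata/build, then image ls to confirm the tag landed) is easy to reproduce outside the test harness. A rough Go sketch using the same profile, tag, and context directory as this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Build the image inside the minikube VM from the local context dir.
        build := exec.Command("out/minikube-darwin-arm64", "-p", "functional-864000",
            "image", "build", "-t", "localhost/my-image:functional-864000", "testdata/build")
        if out, err := build.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("build failed: %v\n%s", err, out))
        }
        // Confirm the new tag shows up in the image list.
        ls, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-864000",
            "image", "ls").Output()
        if err != nil {
            panic(err)
        }
        if !strings.Contains(string(ls), "localhost/my-image:functional-864000") {
            panic("built image not present in image ls output")
        }
    }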

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (5.5s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/03/11 03:46:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.455518375s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-864000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.50s)

TestFunctional/parallel/DockerEnv/bash (0.46s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-864000 docker-env) && out/minikube-darwin-arm64 status -p functional-864000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-864000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.46s)
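
Note: the DockerEnv check works by eval-ing the export lines that docker-env prints, which repoints the host docker CLI at the daemon inside the minikube VM. A minimal Go equivalent of the test's shell one-liner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same one-liner the test runs: evaluate docker-env in a shell,
        // then list images through the docker daemon inside the VM.
        out, err := exec.Command("/bin/bash", "-c",
            "eval $(out/minikube-darwin-arm64 -p functional-864000 docker-env) && docker images").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v\n%s", err, out))
        }
        fmt.Printf("%s", out)
    }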

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image load --daemon gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 image load --daemon gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr: (1.9838875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image load --daemon gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 image load --daemon gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr: (1.35588325s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.332571208s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-864000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image load --daemon gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-864000 image load --daemon gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr: (1.795804958s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.24s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image save gcr.io/google-containers/addon-resizer:functional-864000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image rm gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-864000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-864000 image save --daemon gcr.io/google-containers/addon-resizer:functional-864000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-864000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
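
Note: together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above exercise a full save/remove/reload round trip through a tarball. A condensed Go sketch of that sequence (image name and tar path copied from this run; any writable tar path works):

    package main

    import "os/exec"

    // run is a tiny helper that fails loudly on the first error.
    func run(args ...string) {
        if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
            panic(string(out))
        }
    }

    func main() {
        img := "gcr.io/google-containers/addon-resizer:functional-864000"
        tar := "/Users/jenkins/workspace/addon-resizer-save.tar"
        // Save the cached image to a tarball, drop it, then restore it.
        run("out/minikube-darwin-arm64", "-p", "functional-864000", "image", "save", img, tar)
        run("out/minikube-darwin-arm64", "-p", "functional-864000", "image", "rm", img)
        run("out/minikube-darwin-arm64", "-p", "functional-864000", "image", "load", tar)
    }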

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.11s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-864000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-864000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-864000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestMutliControlPlane/serial/StartCluster (245.67s)
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-600000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0311 03:46:59.695659    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:49:15.875108    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:49:43.584263    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/addons-597000/client.crt: no such file or directory
E0311 03:50:35.788434    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:35.794778    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:35.806465    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:35.828605    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:35.870801    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:35.952948    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:36.115131    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:36.437217    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:37.078851    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:38.360939    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:40.923065    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:50:46.044990    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-600000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (4m5.480058916s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (245.67s)
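
Note: the E-level cert_rotation lines interleaved above are most likely host-side noise: client-go is still watching client certificates for the addons-597000 and functional-864000 profiles, whose files were removed when those profiles were cleaned up earlier in the run. They do not indicate a problem with the ha-600000 cluster under test.
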

                                                
                                    
TestMutliControlPlane/serial/DeployApp (9.03s)
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0311 03:50:56.287405    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-600000 -- rollout status deployment/busybox: (7.328937292s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-6kfl7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-fs5k5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-zp9zh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-6kfl7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-fs5k5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-zp9zh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-6kfl7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-fs5k5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-zp9zh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (9.03s)
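
Note: the three lookups per pod check an external name (kubernetes.io), the in-cluster short name (kubernetes.default), and the fully qualified service name (kubernetes.default.svc.cluster.local), so a pass means every busybox replica resolves both external and cluster-internal DNS.
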

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (0.78s)
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-6kfl7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-6kfl7 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-fs5k5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-fs5k5 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-zp9zh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-600000 -- exec busybox-5b5d89c9d6-zp9zh -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (0.78s)
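
Note: the nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 pipeline extracts the address that busybox's nslookup prints on its fifth output line (third space-separated field), and the follow-up ping -c 1 192.168.105.1 confirms each pod can reach the host side of the qemu2 network at that address.
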

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (76.1s)
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-600000 -v=7 --alsologtostderr
E0311 03:51:16.768886    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
E0311 03:51:57.729588    1434 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18350-986/.minikube/profiles/functional-864000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-600000 -v=7 --alsologtostderr: (1m15.873407667s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (76.10s)

TestMutliControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-600000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.12s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (2.47s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.464956667s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (2.47s)

TestMutliControlPlane/serial/CopyFile (4.38s)
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp testdata/cp-test.txt ha-600000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile170892433/001/cp-test_ha-600000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000:/home/docker/cp-test.txt ha-600000-m02:/home/docker/cp-test_ha-600000_ha-600000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test_ha-600000_ha-600000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000:/home/docker/cp-test.txt ha-600000-m03:/home/docker/cp-test_ha-600000_ha-600000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test_ha-600000_ha-600000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000:/home/docker/cp-test.txt ha-600000-m04:/home/docker/cp-test_ha-600000_ha-600000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test_ha-600000_ha-600000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp testdata/cp-test.txt ha-600000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile170892433/001/cp-test_ha-600000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m02:/home/docker/cp-test.txt ha-600000:/home/docker/cp-test_ha-600000-m02_ha-600000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test_ha-600000-m02_ha-600000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m02:/home/docker/cp-test.txt ha-600000-m03:/home/docker/cp-test_ha-600000-m02_ha-600000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test_ha-600000-m02_ha-600000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m02:/home/docker/cp-test.txt ha-600000-m04:/home/docker/cp-test_ha-600000-m02_ha-600000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test_ha-600000-m02_ha-600000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp testdata/cp-test.txt ha-600000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile170892433/001/cp-test_ha-600000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m03:/home/docker/cp-test.txt ha-600000:/home/docker/cp-test_ha-600000-m03_ha-600000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test_ha-600000-m03_ha-600000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m03:/home/docker/cp-test.txt ha-600000-m02:/home/docker/cp-test_ha-600000-m03_ha-600000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test_ha-600000-m03_ha-600000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m03:/home/docker/cp-test.txt ha-600000-m04:/home/docker/cp-test_ha-600000-m03_ha-600000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test_ha-600000-m03_ha-600000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp testdata/cp-test.txt ha-600000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile170892433/001/cp-test_ha-600000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m04:/home/docker/cp-test.txt ha-600000:/home/docker/cp-test_ha-600000-m04_ha-600000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000 "sudo cat /home/docker/cp-test_ha-600000-m04_ha-600000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m04:/home/docker/cp-test.txt ha-600000-m02:/home/docker/cp-test_ha-600000-m04_ha-600000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m02 "sudo cat /home/docker/cp-test_ha-600000-m04_ha-600000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 cp ha-600000-m04:/home/docker/cp-test.txt ha-600000-m03:/home/docker/cp-test_ha-600000-m04_ha-600000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-600000 ssh -n ha-600000-m03 "sudo cat /home/docker/cp-test_ha-600000-m04_ha-600000-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (4.38s)
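
Note: the matrix above amounts to 20 minikube cp operations: testdata/cp-test.txt to each of the four nodes, each node's copy back to the host, and every ordered node-to-node pair, with each transfer verified via sudo cat over ssh.
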

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (76.89s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.891508167s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (76.89s)
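
Note: compare HAppyAfterClusterStart above, where the same profile list --output json returned in about 2.5s; the 1m16.9s here suggests the command was blocked health-checking a control-plane node that was still coming back up after the secondary-node restart.
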

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.17s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-607000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-607000 --output=json --user=testUser: (3.169521541s)
--- PASS: TestJSONOutput/stop/Command (3.17s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-916000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-916000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.109792ms)
-- stdout --
	{"specversion":"1.0","id":"26599226-623a-4de2-810b-11bbb77da457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-916000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d00cc01a-a1b0-430c-abf6-2c92329afa88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18350"}}
	{"specversion":"1.0","id":"0b075fcc-f49f-40cb-800b-8b2ff27c25c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig"}}
	{"specversion":"1.0","id":"c0cbebe7-cc9f-40f8-9e89-82afe202d7a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"06038146-c59a-4fe3-9bfb-eb6e6f66afbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b84e36d-88fd-4583-bef8-7c5e3961af78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube"}}
	{"specversion":"1.0","id":"4bd3adb8-5ed8-4c9a-9237-dbbd63c6ab01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2db194e-3e5b-4cae-adc7-11a002e3643d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-916000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-916000
--- PASS: TestErrorJSONOutput (0.33s)
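
Each line in the stdout block above is a self-contained CloudEvents-style JSON object, so the stream can be consumed line by line. A minimal Go sketch (not part of the test suite; it assumes only the fields visible in the output above) of decoding such a stream and surfacing the error event:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the subset of CloudEvents fields visible in the log above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON lines
		}
		// The DRV_UNSUPPORTED_OS event above has this type and carries
		// the exit code ("56") and message in its data map.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}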

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (4.91s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.91s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-629000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-886000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.808625ms)
-- stdout --
	* [NoKubernetes-886000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18350-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18350-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-886000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-886000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (50.935625ms)
-- stdout --
	* The control-plane node NoKubernetes-886000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-886000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
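
The check above passes because the ssh probe fails in the right way: exit status 83, which the output ties to a stopped host, rather than a live kubelet answering. A small sketch (not from the suite; the binary path and profile name are copied from the log, the rest is illustrative) of inspecting that exit code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Re-issue the probe the test runs against the same profile.
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh",
		"-p", "NoKubernetes-886000",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 83 in this run meant "host is not running: state=Stopped",
		// i.e. kubelet cannot be active because the guest is down.
		fmt.Println("probe exit code:", ee.ExitCode())
	}
}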

TestNoKubernetes/serial/ProfileList (0.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (2.14s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-886000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-886000: (2.140257291s)
--- PASS: TestNoKubernetes/serial/Stop (2.14s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.06s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-886000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-886000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (61.006958ms)
-- stdout --
	* The control-plane node NoKubernetes-886000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-886000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.06s)

TestStartStop/group/old-k8s-version/serial/Stop (2.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-749000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-749000 --alsologtostderr -v=3: (2.108206125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-749000 -n old-k8s-version-749000: exit status 7 (57.713542ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-749000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
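
Two details above are worth noting: the harness tolerates a non-zero status exit for a stopped host ("may be ok"), and the dashboard addon is enabled while the profile is down, to be applied when it next starts. A rough equivalent outside the harness (a sketch; the profile name and flags are taken from the log, everything else is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

const minikube = "out/minikube-darwin-arm64"

func main() {
	profile := "old-k8s-version-749000"

	// Like the test, read the host state; a non-zero exit (7 in the
	// run above, with "Stopped" on stdout) is expected here.
	out, err := exec.Command(minikube, "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	fmt.Printf("host: %s (exit err: %v, may be ok)\n", out, err)

	// Enable the addon against the stopped profile.
	if err := exec.Command(minikube, "addons", "enable", "dashboard",
		"-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").Run(); err != nil {
		fmt.Println("addons enable failed:", err)
	}
}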

TestStartStop/group/no-preload/serial/Stop (3.8s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-114000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-114000 --alsologtostderr -v=3: (3.795564084s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.80s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-114000 -n no-preload-114000: exit status 7 (57.966208ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-114000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.05s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-636000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-636000 --alsologtostderr -v=3: (2.049660625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-636000 -n embed-certs-636000: exit status 7 (59.022167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-636000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.93s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-735000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-735000 --alsologtostderr -v=3: (1.934587292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.93s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-735000 -n default-k8s-diff-port-735000: exit status 7 (61.942125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-735000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-306000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-306000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-306000 --alsologtostderr -v=3: (3.096203583s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-306000 -n newest-cni-306000: exit status 7 (56.8275ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-306000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/281)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.5s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-896000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-896000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-896000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /etc/hosts:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /etc/resolv.conf:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-896000

>>> host: crictl pods:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: crictl containers:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> k8s: describe netcat deployment:
error: context "cilium-896000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-896000" does not exist

>>> k8s: netcat logs:
error: context "cilium-896000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-896000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-896000" does not exist

>>> k8s: coredns logs:
error: context "cilium-896000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-896000" does not exist

>>> k8s: api server logs:
error: context "cilium-896000" does not exist

>>> host: /etc/cni:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: ip a s:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: ip r s:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: iptables-save:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: iptables table nat:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-896000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-896000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-896000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-896000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-896000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-896000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-896000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-896000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-896000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-896000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-896000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: kubelet daemon config:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> k8s: kubelet logs:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-896000

>>> host: docker daemon status:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: docker daemon config:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: docker system info:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: cri-docker daemon status:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: cri-docker daemon config:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: cri-dockerd version:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: containerd daemon status:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: containerd daemon config:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: containerd config dump:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: crio daemon status:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: crio daemon config:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: /etc/crio:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"

>>> host: crio config:
* Profile "cilium-896000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896000"
----------------------- debugLogs end: cilium-896000 [took: 2.26251925s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-896000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-896000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)

TestStartStop/group/disable-driver-mounts (0.24s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-598000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-598000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
